00:00:00.000 Started by upstream project "autotest-nightly-lts" build number 1711
00:00:00.000 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 2972
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.032 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/centos7-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.033 The recommended git tool is: git
00:00:00.033 using credential 00000000-0000-0000-0000-000000000002
00:00:00.034 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/centos7-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.055 Fetching changes from the remote Git repository
00:00:00.060 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.106 Using shallow fetch with depth 1
00:00:00.106 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.106 > git --version # timeout=10
00:00:00.133 > git --version # 'git version 2.39.2'
00:00:00.133 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.134 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.134 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.882 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.893 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.905 Checking out Revision d55dd09e9e6d4661df5d1073790609767cbcb60c (FETCH_HEAD)
00:00:03.905 > git config core.sparsecheckout # timeout=10
00:00:03.918 > git read-tree -mu HEAD # timeout=10
00:00:03.935 > git checkout -f d55dd09e9e6d4661df5d1073790609767cbcb60c # timeout=5
00:00:03.955 Commit message: "ansible/roles/custom_facts: Add subsystem info to VMDs' nvmes"
00:00:03.955 > git rev-list --no-walk d55dd09e9e6d4661df5d1073790609767cbcb60c # timeout=10
00:00:04.066 [Pipeline] Start of Pipeline
00:00:04.080 [Pipeline] library
00:00:04.081 Loading library shm_lib@master
00:00:04.082 Library shm_lib@master is cached. Copying from home.
00:00:04.095 [Pipeline] node
00:00:04.176 Running on VM-host-WFP1 in /var/jenkins/workspace/centos7-vg-autotest
00:00:04.178 [Pipeline] {
00:00:04.193 [Pipeline] catchError
00:00:04.195 [Pipeline] {
00:00:04.209 [Pipeline] wrap
00:00:04.220 [Pipeline] {
00:00:04.228 [Pipeline] stage
00:00:04.229 [Pipeline] { (Prologue)
00:00:04.249 [Pipeline] echo
00:00:04.251 Node: VM-host-WFP1
00:00:04.257 [Pipeline] cleanWs
00:00:04.266 [WS-CLEANUP] Deleting project workspace...
00:00:04.266 [WS-CLEANUP] Deferred wipeout is used...
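The checkout above pins the shared build scripts (jbp) to one exact revision using a depth-1 fetch, so the agent reads the same jjb config no matter what lands on master afterwards. A minimal standalone reproduction of that sequence, assuming only the repository URL and commit hash recorded in the log:

  # Shallow fetch of the master tip only (no history), as in the log above
  git init jbp && cd jbp
  git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
  git fetch --tags --force --progress --depth=1 -- \
      https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
  # Detached checkout of the exact revision the build recorded
  git checkout -f d55dd09e9e6d4661df5d1073790609767cbcb60c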
00:00:04.272 [WS-CLEANUP] done
00:00:04.484 [Pipeline] setCustomBuildProperty
00:00:04.556 [Pipeline] nodesByLabel
00:00:04.557 Found a total of 1 nodes with the 'sorcerer' label
00:00:04.564 [Pipeline] httpRequest
00:00:04.567 HttpMethod: GET
00:00:04.568 URL: http://10.211.164.101/packages/jbp_d55dd09e9e6d4661df5d1073790609767cbcb60c.tar.gz
00:00:04.569 Sending request to url: http://10.211.164.101/packages/jbp_d55dd09e9e6d4661df5d1073790609767cbcb60c.tar.gz
00:00:04.577 Response Code: HTTP/1.1 200 OK
00:00:04.577 Success: Status code 200 is in the accepted range: 200,404
00:00:04.578 Saving response body to /var/jenkins/workspace/centos7-vg-autotest/jbp_d55dd09e9e6d4661df5d1073790609767cbcb60c.tar.gz
00:00:05.849 [Pipeline] sh
00:00:06.126 + tar --no-same-owner -xf jbp_d55dd09e9e6d4661df5d1073790609767cbcb60c.tar.gz
00:00:06.142 [Pipeline] httpRequest
00:00:06.146 HttpMethod: GET
00:00:06.147 URL: http://10.211.164.101/packages/spdk_3b33f433344ee82a3d99d10cfd6af5729440114b.tar.gz
00:00:06.147 Sending request to url: http://10.211.164.101/packages/spdk_3b33f433344ee82a3d99d10cfd6af5729440114b.tar.gz
00:00:06.158 Response Code: HTTP/1.1 200 OK
00:00:06.158 Success: Status code 200 is in the accepted range: 200,404
00:00:06.159 Saving response body to /var/jenkins/workspace/centos7-vg-autotest/spdk_3b33f433344ee82a3d99d10cfd6af5729440114b.tar.gz
00:00:37.090 [Pipeline] sh
00:00:37.368 + tar --no-same-owner -xf spdk_3b33f433344ee82a3d99d10cfd6af5729440114b.tar.gz
00:00:39.918 [Pipeline] sh
00:00:40.195 + git -C spdk log --oneline -n5
00:00:40.195 3b33f4333 test/nvme/cuse: Fix typo
00:00:40.195 bf784f7a1 test/nvme: Set SEL only when the field is supported
00:00:40.195 a5153247d autopackage: Slurp spdk-ld-path while building against native DPDK
00:00:40.195 b14fb7292 autopackage: Cut number of make jobs in half under clang+LTO
00:00:40.195 1d70a0c9e configure: Hint compiler at what linker to use via -fuse-ld
00:00:40.212 [Pipeline] writeFile
00:00:40.227 [Pipeline] sh
00:00:40.506 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:40.516 [Pipeline] sh
00:00:40.793 + cat autorun-spdk.conf
00:00:40.793 SPDK_TEST_UNITTEST=1
00:00:40.793 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:40.793 SPDK_TEST_BLOCKDEV=1
00:00:40.793 SPDK_RUN_ASAN=1
00:00:40.793 SPDK_TEST_DAOS=1
00:00:40.793 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:40.798 RUN_NIGHTLY=1
00:00:40.801 [Pipeline] }
00:00:40.818 [Pipeline] // stage
00:00:40.833 [Pipeline] stage
00:00:40.835 [Pipeline] { (Run VM)
00:00:40.849 [Pipeline] sh
00:00:41.125 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:41.125 + echo 'Start stage prepare_nvme.sh'
00:00:41.125 Start stage prepare_nvme.sh
00:00:41.125 + [[ -n 2 ]]
00:00:41.125 + disk_prefix=ex2
00:00:41.125 + [[ -n /var/jenkins/workspace/centos7-vg-autotest ]]
00:00:41.125 + [[ -e /var/jenkins/workspace/centos7-vg-autotest/autorun-spdk.conf ]]
00:00:41.125 + source /var/jenkins/workspace/centos7-vg-autotest/autorun-spdk.conf
00:00:41.125 ++ SPDK_TEST_UNITTEST=1
00:00:41.125 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:41.125 ++ SPDK_TEST_BLOCKDEV=1
00:00:41.125 ++ SPDK_RUN_ASAN=1
00:00:41.125 ++ SPDK_TEST_DAOS=1
00:00:41.125 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:41.125 ++ RUN_NIGHTLY=1
00:00:41.125 + cd /var/jenkins/workspace/centos7-vg-autotest
00:00:41.125 + nvme_files=()
00:00:41.125 + declare -A nvme_files
00:00:41.125 + backend_dir=/var/lib/libvirt/images/backends
00:00:41.125 + nvme_files['nvme.img']=5G
00:00:41.125 + nvme_files['nvme-cmb.img']=5G
00:00:41.125 + nvme_files['nvme-multi0.img']=4G
00:00:41.125 + nvme_files['nvme-multi1.img']=4G
00:00:41.125 + nvme_files['nvme-multi2.img']=4G
00:00:41.125 + nvme_files['nvme-openstack.img']=8G
00:00:41.125 + nvme_files['nvme-zns.img']=5G
00:00:41.125 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:41.125 + (( SPDK_TEST_FTL == 1 ))
00:00:41.125 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:41.125 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:41.125 + for nvme in "${!nvme_files[@]}"
00:00:41.125 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G
00:00:41.125 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:41.125 + for nvme in "${!nvme_files[@]}"
00:00:41.125 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G
00:00:41.125 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:41.125 + for nvme in "${!nvme_files[@]}"
00:00:41.125 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G
00:00:41.125 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:41.125 + for nvme in "${!nvme_files[@]}"
00:00:41.125 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G
00:00:41.125 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:41.125 + for nvme in "${!nvme_files[@]}"
00:00:41.125 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G
00:00:41.125 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:41.125 + for nvme in "${!nvme_files[@]}"
00:00:41.125 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G
00:00:41.125 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:41.383 + for nvme in "${!nvme_files[@]}"
00:00:41.383 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G
00:00:41.640 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:41.640 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu
00:00:41.640 + echo 'End stage prepare_nvme.sh'
00:00:41.641 End stage prepare_nvme.sh
00:00:41.651 [Pipeline] sh
00:00:41.927 + DISTRO=centos7 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:41.927 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -H -a -v -f centos7
00:00:41.927
00:00:41.927 DIR=/var/jenkins/workspace/centos7-vg-autotest/spdk/scripts/vagrant
00:00:41.927 SPDK_DIR=/var/jenkins/workspace/centos7-vg-autotest/spdk
00:00:41.927 VAGRANT_TARGET=/var/jenkins/workspace/centos7-vg-autotest
00:00:41.927 HELP=0
00:00:41.927 DRY_RUN=0
00:00:41.927 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,
00:00:41.928 NVME_DISKS_TYPE=nvme,
00:00:41.928 NVME_AUTO_CREATE=0
00:00:41.928 NVME_DISKS_NAMESPACES=,
00:00:41.928 NVME_CMB=,
00:00:41.928 NVME_PMR=,
00:00:41.928 NVME_ZNS=,
00:00:41.928 NVME_MS=,
NVME_FDP=,
00:00:41.928 SPDK_VAGRANT_DISTRO=centos7
00:00:41.928 SPDK_VAGRANT_VMCPU=10
00:00:41.928 SPDK_VAGRANT_VMRAM=12288
00:00:41.928 SPDK_VAGRANT_PROVIDER=libvirt
00:00:41.928 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:41.928 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:41.928 SPDK_OPENSTACK_NETWORK=0
00:00:41.928 VAGRANT_PACKAGE_BOX=0
00:00:41.928 VAGRANTFILE=/var/jenkins/workspace/centos7-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:41.928 FORCE_DISTRO=true
00:00:41.928 VAGRANT_BOX_VERSION=
00:00:41.928 EXTRA_VAGRANTFILES=
00:00:41.928 NIC_MODEL=e1000
00:00:41.928
00:00:41.928 mkdir: created directory '/var/jenkins/workspace/centos7-vg-autotest/centos7-libvirt'
00:00:41.928 /var/jenkins/workspace/centos7-vg-autotest/centos7-libvirt /var/jenkins/workspace/centos7-vg-autotest
00:00:44.495 Bringing machine 'default' up with 'libvirt' provider...
00:00:45.907 ==> default: Creating image (snapshot of base box volume).
00:00:46.166 ==> default: Creating domain with the following settings...
00:00:46.166 ==> default: -- Name: centos7-7.8.2003-1711172311-2200_default_1713212848_768ee9fedf3b8c8220cc
00:00:46.166 ==> default: -- Domain type: kvm
00:00:46.166 ==> default: -- Cpus: 10
00:00:46.166 ==> default: -- Feature: acpi
00:00:46.166 ==> default: -- Feature: apic
00:00:46.166 ==> default: -- Feature: pae
00:00:46.166 ==> default: -- Memory: 12288M
00:00:46.166 ==> default: -- Memory Backing: hugepages:
00:00:46.166 ==> default: -- Management MAC:
00:00:46.166 ==> default: -- Loader:
00:00:46.166 ==> default: -- Nvram:
00:00:46.166 ==> default: -- Base box: spdk/centos7
00:00:46.166 ==> default: -- Storage pool: default
00:00:46.166 ==> default: -- Image: /var/lib/libvirt/images/centos7-7.8.2003-1711172311-2200_default_1713212848_768ee9fedf3b8c8220cc.img (20G)
00:00:46.166 ==> default: -- Volume Cache: default
00:00:46.166 ==> default: -- Kernel:
00:00:46.166 ==> default: -- Initrd:
00:00:46.166 ==> default: -- Graphics Type: vnc
00:00:46.166 ==> default: -- Graphics Port: -1
00:00:46.166 ==> default: -- Graphics IP: 127.0.0.1
00:00:46.166 ==> default: -- Graphics Password: Not defined
00:00:46.166 ==> default: -- Video Type: cirrus
00:00:46.166 ==> default: -- Video VRAM: 9216
00:00:46.166 ==> default: -- Sound Type:
00:00:46.166 ==> default: -- Keymap: en-us
00:00:46.166 ==> default: -- TPM Path:
00:00:46.166 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:46.166 ==> default: -- Command line args:
00:00:46.166 ==> default: -> value=-device,
00:00:46.166 ==> default: -> value=nvme,id=nvme-0,serial=12340,
00:00:46.166 ==> default: -> value=-drive,
00:00:46.166 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0,
00:00:46.166 ==> default: -> value=-device,
00:00:46.166 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:46.733 ==> default: Creating shared folders metadata...
00:00:46.733 ==> default: Starting domain.
00:00:48.111 ==> default: Waiting for domain to get an IP address...
00:01:00.364 ==> default: Waiting for SSH to become available...
00:01:02.268 ==> default: Configuring and enabling network interfaces...
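The "-> value=" pairs above are the raw QEMU arguments appended for the NVMe backend. Flattened into one command line (a sketch assembled only from the emulator path and arguments shown in the log; the rest of the domain configuration that libvirt generates is omitted), they amount to roughly:

  /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
      # ...CPU/memory/graphics options generated by libvirt omitted... \
      -device nvme,id=nvme-0,serial=12340 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0 \
      -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096

In other words, the raw 5G ex2-nvme.img created in prepare_nvme.sh is exposed to the guest as a single NVMe controller with one namespace using 4096-byte logical and physical blocks.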
00:01:07.539 default: SSH address: 192.168.121.234:22
00:01:07.539 default: SSH username: vagrant
00:01:07.539 default: SSH auth method: private key
00:01:08.925 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/centos7-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:18.915 ==> default: Mounting SSHFS shared folder...
00:01:19.852 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/centos7-vg-autotest/centos7-libvirt/output => /home/vagrant/spdk_repo/output
00:01:19.852 ==> default: Checking Mount..
00:01:20.790 ==> default: Folder Successfully Mounted!
00:01:20.790 ==> default: Running provisioner: file...
00:01:21.049 default: ~/.gitconfig => .gitconfig
00:01:21.308
00:01:21.308 SUCCESS!
00:01:21.308
00:01:21.308 cd to /var/jenkins/workspace/centos7-vg-autotest/centos7-libvirt and type "vagrant ssh" to use.
00:01:21.308 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:21.308 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/centos7-vg-autotest/centos7-libvirt" to destroy all trace of vm.
00:01:21.308
00:01:21.317 [Pipeline] }
00:01:21.332 [Pipeline] // stage
00:01:21.339 [Pipeline] dir
00:01:21.339 Running in /var/jenkins/workspace/centos7-vg-autotest/centos7-libvirt
00:01:21.341 [Pipeline] {
00:01:21.352 [Pipeline] catchError
00:01:21.353 [Pipeline] {
00:01:21.368 [Pipeline] sh
00:01:21.649 + vagrant ssh-config --host vagrant
00:01:21.649 + sed+ -ne /^Host/,$p
00:01:21.649 tee ssh_conf
00:01:24.940 Host vagrant
00:01:24.940 HostName 192.168.121.234
00:01:24.940 User vagrant
00:01:24.940 Port 22
00:01:24.940 UserKnownHostsFile /dev/null
00:01:24.940 StrictHostKeyChecking no
00:01:24.940 PasswordAuthentication no
00:01:24.940 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-centos7/7.8.2003-1711172311-2200/libvirt/centos7
00:01:24.940 IdentitiesOnly yes
00:01:24.940 LogLevel FATAL
00:01:24.940 ForwardAgent yes
00:01:24.940 ForwardX11 yes
00:01:24.940
00:01:24.958 [Pipeline] withEnv
00:01:24.961 [Pipeline] {
00:01:24.979 [Pipeline] sh
00:01:25.261 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:25.261 source /etc/os-release
00:01:25.261 [[ -e /image.version ]] && img=$(< /image.version)
00:01:25.261 # Minimal, systemd-like check.
00:01:25.261 if [[ -e /.dockerenv ]]; then
00:01:25.261 # Clear garbage from the node's name:
00:01:25.261 # agt-er_autotest_547-896 -> autotest_547-896
00:01:25.261 # $HOSTNAME is the actual container id
00:01:25.261 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:25.261 if mountpoint -q /etc/hostname; then
00:01:25.261 # We can assume this is a mount from a host where container is running,
00:01:25.261 # so fetch its hostname to easily identify the target swarm worker.
00:01:25.261 container="$(< /etc/hostname) ($agent)"
00:01:25.261 else
00:01:25.261 # Fallback
00:01:25.261 container=$agent
00:01:25.261 fi
00:01:25.261 fi
00:01:25.261 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:25.261
00:01:25.274 [Pipeline] }
00:01:25.293 [Pipeline] // withEnv
00:01:25.302 [Pipeline] setCustomBuildProperty
00:01:25.318 [Pipeline] stage
00:01:25.321 [Pipeline] { (Tests)
00:01:25.340 [Pipeline] sh
00:01:25.621 + scp -F ssh_conf -r /var/jenkins/workspace/centos7-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:25.636 [Pipeline] timeout
00:01:25.636 Timeout set to expire in 1 hr 0 min
00:01:25.638 [Pipeline] {
00:01:25.654 [Pipeline] sh
00:01:25.933 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:26.192 HEAD is now at 3b33f4333 test/nvme/cuse: Fix typo
00:01:26.205 [Pipeline] sh
00:01:26.487 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:26.487 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core"
00:01:26.502 [Pipeline] sh
00:01:26.784 + scp -F ssh_conf -r /var/jenkins/workspace/centos7-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:26.799 [Pipeline] sh
00:01:27.081 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo
00:01:27.081 ++ readlink -f spdk_repo
00:01:27.081 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:27.081 + [[ -n /home/vagrant/spdk_repo ]]
00:01:27.081 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:27.081 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:27.081 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:27.081 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:27.081 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:27.081 + cd /home/vagrant/spdk_repo
00:01:27.081 + source /etc/os-release
00:01:27.081 ++ NAME='CentOS Linux'
00:01:27.081 ++ VERSION='7 (Core)'
00:01:27.081 ++ ID=centos
00:01:27.081 ++ ID_LIKE='rhel fedora'
00:01:27.081 ++ VERSION_ID=7
00:01:27.081 ++ PRETTY_NAME='CentOS Linux 7 (Core)'
00:01:27.081 ++ ANSI_COLOR='0;31'
00:01:27.081 ++ CPE_NAME=cpe:/o:centos:centos:7
00:01:27.081 ++ HOME_URL=https://www.centos.org/
00:01:27.081 ++ BUG_REPORT_URL=https://bugs.centos.org/
00:01:27.081 ++ CENTOS_MANTISBT_PROJECT=CentOS-7
00:01:27.081 ++ CENTOS_MANTISBT_PROJECT_VERSION=7
00:01:27.081 ++ REDHAT_SUPPORT_PRODUCT=centos
00:01:27.081 ++ REDHAT_SUPPORT_PRODUCT_VERSION=7
00:01:27.081 + uname -a
00:01:27.081 Linux centos7-cloud-1711172311-2200 3.10.0-1160.114.2.el7.x86_64 #1 SMP Wed Mar 20 15:54:52 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
00:01:27.081 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:27.081 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core"
00:01:27.370 Hugepages
00:01:27.370 node hugesize free / total
00:01:27.370 node0 1048576kB 0 / 0
00:01:27.370 node0 2048kB 0 / 0
00:01:27.370
00:01:27.370 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:27.370 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:27.370 NVMe 0000:00:06.0 1b36 0010 0 nvme nvme0 nvme0n1
00:01:27.370 + rm -f /tmp/spdk-ld-path
00:01:27.370 + source autorun-spdk.conf
00:01:27.370 ++ SPDK_TEST_UNITTEST=1
00:01:27.370 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:27.370 ++ SPDK_TEST_BLOCKDEV=1
00:01:27.370 ++ SPDK_RUN_ASAN=1
00:01:27.370 ++ SPDK_TEST_DAOS=1
00:01:27.370 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:27.370 ++ RUN_NIGHTLY=1
00:01:27.370 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:27.370 + [[ -n '' ]]
00:01:27.370 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:27.370 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core"
00:01:27.370 + for M in /var/spdk/build-*-manifest.txt
00:01:27.370 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:27.370 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:27.370 + for M in /var/spdk/build-*-manifest.txt
00:01:27.370 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:27.370 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:27.370 ++ uname
00:01:27.370 + [[ Linux == \L\i\n\u\x ]]
00:01:27.370 + sudo dmesg -T
00:01:27.641 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core"
00:01:27.641 + sudo dmesg --clear
00:01:27.641 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core"
00:01:27.641 + dmesg_pid=2913
00:01:27.641 + [[ CentOS Linux == FreeBSD ]]
00:01:27.641 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:27.641 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:27.641 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:27.641 + sudo dmesg -Tw
00:01:27.641 + [[ -x /usr/src/fio-static/fio ]]
00:01:27.641 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:27.641 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:27.641 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:27.641 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64)
00:01:27.641 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:01:27.641 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:01:27.641 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:27.641 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:27.641 Test configuration:
00:01:27.641 SPDK_TEST_UNITTEST=1
00:01:27.641 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:27.641 SPDK_TEST_BLOCKDEV=1
00:01:27.641 SPDK_RUN_ASAN=1
00:01:27.641 SPDK_TEST_DAOS=1
00:01:27.641 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:27.641 RUN_NIGHTLY=1
20:28:10 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
20:28:10 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
20:28:10 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
20:28:10 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
20:28:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin
20:28:10 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin
20:28:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin
20:28:10 -- paths/export.sh@5 -- $ export PATH
20:28:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin
20:28:10 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output
20:28:10 -- common/autobuild_common.sh@435 -- $ date +%s
20:28:10 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713212890.XXXXXX
20:28:10 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713212890.8poRWP
20:28:10 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
20:28:10 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
20:28:10 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
20:28:10 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
20:28:10 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
20:28:10 -- common/autobuild_common.sh@451 -- $ get_config_params
20:28:10 -- common/autotest_common.sh@387 -- $ xtrace_disable
20:28:10 -- common/autotest_common.sh@10 -- $ set +x
20:28:10 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --enable-asan --enable-coverage --with-daos'
20:28:10 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
20:28:10 -- spdk/autobuild.sh@12 -- $ umask 022
20:28:10 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
20:28:10 -- spdk/autobuild.sh@16 -- $ date -u
00:01:27.641 Mon Apr 15 20:28:10 UTC 2024
20:28:10 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:27.641 LTS-20-g3b33f4333
20:28:10 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
20:28:10 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
20:28:10 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
20:28:10 -- common/autotest_common.sh@1083 -- $ xtrace_disable
20:28:10 -- common/autotest_common.sh@10 -- $ set +x
00:01:27.641 ************************************
00:01:27.641 START TEST asan
00:01:27.641 ************************************
00:01:27.641 using asan
20:28:10 -- common/autotest_common.sh@1104 -- $ echo 'using asan'
00:01:27.641
00:01:27.641 real 0m0.000s
00:01:27.641 user 0m0.000s
00:01:27.641 sys 0m0.000s
20:28:10 -- common/autotest_common.sh@1105 -- $ xtrace_disable
20:28:10 -- common/autotest_common.sh@10 -- $ set +x
00:01:27.641 ************************************
00:01:27.641 END TEST asan
00:01:27.641 ************************************
20:28:10 -- spdk/autobuild.sh@23 -- $ '[' 0 -eq 1 ']'
20:28:10 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
20:28:10 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
20:28:10 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
20:28:10 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
20:28:10 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
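run_test, as used above for "asan", wraps a named test in START/END banners and reports its runtime through the shell's time keyword. A simplified sketch of that pattern (not SPDK's actual implementation, which lives in common/autotest_common.sh as the xtrace lines show):

  # Hypothetical, trimmed-down run_test: banner, timed execution, banner.
  run_test() {
      local name=$1
      shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"    # e.g. run_test asan echo 'using asan'
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
  }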
00:01:27.641 20:28:10 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]]
20:28:10 -- spdk/autobuild.sh@58 -- $ unittest_build
20:28:10 -- common/autobuild_common.sh@411 -- $ run_test unittest_build _unittest_build
20:28:10 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']'
20:28:10 -- common/autotest_common.sh@1083 -- $ xtrace_disable
20:28:10 -- common/autotest_common.sh@10 -- $ set +x
00:01:27.901 ************************************
00:01:27.901 START TEST unittest_build
00:01:27.901 ************************************
20:28:10 -- common/autotest_common.sh@1104 -- $ _unittest_build
20:28:10 -- common/autobuild_common.sh@402 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --enable-asan --enable-coverage --with-daos --without-shared
00:01:27.901 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:27.901 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:28.159 RDMA_OPTION_ID_ACK_TIMEOUT is not supported
00:01:28.159 Using 'verbs' RDMA provider
00:01:29.093 WARNING: ISA-L & DPDK crypto cannot be used as nasm ver must be 2.14 or newer.
00:01:29.093 Without ISA-L, there is no software support for crypto or compression,
00:01:29.093 so these features will be disabled.
00:01:29.093 Creating mk/config.mk...done.
00:01:29.093 Creating mk/cc.flags.mk...done.
00:01:29.093 Type 'make' to build.
00:01:29.093 20:28:12 -- common/autobuild_common.sh@403 -- $ make -j10
00:01:29.352 make[1]: Nothing to be done for 'all'.
00:01:34.627 The Meson build system
00:01:34.627 Version: 0.61.5
00:01:34.627 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:01:34.627 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:01:34.627 Build type: native build
00:01:34.627 Program cat found: YES (/bin/cat)
00:01:34.627 Project name: DPDK
00:01:34.627 Project version: 23.11.0
00:01:34.627 C compiler for the host machine: cc (gcc 10.2.1 "cc (GCC) 10.2.1 20210130 (Red Hat 10.2.1-11)")
00:01:34.627 C linker for the host machine: cc ld.bfd 2.35-5
00:01:34.627 Host machine cpu family: x86_64
00:01:34.627 Host machine cpu: x86_64
00:01:34.627 Message: ## Building in Developer Mode ##
00:01:34.627 Program pkg-config found: YES (/bin/pkg-config)
00:01:34.627 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:01:34.627 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:34.627 Program python3 found: YES (/usr/bin/python3)
00:01:34.627 Program cat found: YES (/bin/cat)
00:01:34.627 Compiler for C supports arguments -march=native: YES
00:01:34.627 Checking for size of "void *" : 8
00:01:34.627 Checking for size of "void *" : 8
00:01:34.627 Library m found: YES
00:01:34.627 Library numa found: YES
00:01:34.627 Has header "numaif.h" : YES
00:01:34.627 Library fdt found: NO
00:01:34.627 Library execinfo found: NO
00:01:34.627 Has header "execinfo.h" : YES
00:01:34.627 Found pkg-config: /bin/pkg-config (0.27.1)
00:01:34.627 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:34.627 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:34.627 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:34.627 Run-time dependency openssl found: YES 1.0.2k
00:01:34.627 Run-time dependency libpcap found: NO (tried pkgconfig)
00:01:34.627 Library pcap found: NO
00:01:34.627 Compiler for C supports arguments -Wcast-qual: YES
00:01:34.627 Compiler for C supports arguments -Wdeprecated: YES
00:01:34.627 Compiler for C supports arguments -Wformat: YES
00:01:34.627 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:34.627 Compiler for C supports arguments -Wformat-security: NO
00:01:34.627 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:34.627 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:34.627 Compiler for C supports arguments -Wnested-externs: YES
00:01:34.627 Compiler for C supports arguments -Wold-style-definition: YES
00:01:34.627 Compiler for C supports arguments -Wpointer-arith: YES
00:01:34.627 Compiler for C supports arguments -Wsign-compare: YES
00:01:34.627 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:34.627 Compiler for C supports arguments -Wundef: YES
00:01:34.627 Compiler for C supports arguments -Wwrite-strings: YES
00:01:34.627 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:34.627 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:34.627 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:34.627 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:34.627 Program objdump found: YES (/bin/objdump)
00:01:34.627 Compiler for C supports arguments -mavx512f: YES
00:01:34.627 Checking if "AVX512 checking" compiles: YES
00:01:34.627 Fetching value of define "__SSE4_2__" : 1
00:01:34.627 Fetching value of define "__AES__" : 1
00:01:34.627 Fetching value of define "__AVX__" : 1
00:01:34.627 Fetching value of define "__AVX2__" : 1
00:01:34.627 Fetching value of define "__AVX512BW__" : 1
00:01:34.627 Fetching value of define "__AVX512CD__" : 1
00:01:34.627 Fetching value of define "__AVX512DQ__" : 1
00:01:34.627 Fetching value of define "__AVX512F__" : 1
00:01:34.627 Fetching value of define "__AVX512VL__" : 1
00:01:34.627 Fetching value of define "__PCLMUL__" : 1
00:01:34.627 Fetching value of define "__RDRND__" : 1
00:01:34.627 Fetching value of define "__RDSEED__" : 1
00:01:34.627 Fetching value of define "__VPCLMULQDQ__" :
00:01:34.627 Fetching value of define "__znver1__" :
00:01:34.627 Fetching value of define "__znver2__" :
00:01:34.627 Fetching value of define "__znver3__" :
00:01:34.627 Fetching value of define "__znver4__" :
00:01:34.627 Library asan found: YES
00:01:34.627 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:34.627 Message: lib/log: Defining dependency "log"
00:01:34.627 Message: lib/kvargs: Defining dependency "kvargs"
00:01:34.627 Message: lib/telemetry: Defining dependency "telemetry"
00:01:34.627 Library rt found: YES
00:01:34.627 Checking for function "getentropy" : NO
00:01:34.627 Message: lib/eal: Defining dependency "eal"
00:01:34.627 Message: lib/ring: Defining dependency "ring"
00:01:34.627 Message: lib/rcu: Defining dependency "rcu"
00:01:34.627 Message: lib/mempool: Defining dependency "mempool"
00:01:34.627 Message: lib/mbuf: Defining dependency "mbuf"
00:01:34.627 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:34.627 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:34.627 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:36.007 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:36.007 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:36.007 Fetching value of define "__VPCLMULQDQ__" : (cached)
00:01:36.007 Compiler for C supports arguments -mpclmul: YES
00:01:36.007 Compiler for C supports arguments -maes: YES
00:01:36.007 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:36.007 Compiler for C supports arguments -mavx512bw: YES
00:01:36.007 Compiler for C supports arguments -mavx512dq: YES
00:01:36.007 Compiler for C supports arguments -mavx512vl: YES
00:01:36.007 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:36.007 Compiler for C supports arguments -mavx2: YES
00:01:36.007 Compiler for C supports arguments -mavx: YES
00:01:36.007 Message: lib/net: Defining dependency "net"
00:01:36.007 Message: lib/meter: Defining dependency "meter"
00:01:36.007 Message: lib/ethdev: Defining dependency "ethdev"
00:01:36.007 Message: lib/pci: Defining dependency "pci"
00:01:36.007 Message: lib/cmdline: Defining dependency "cmdline"
00:01:36.007 Message: lib/hash: Defining dependency "hash"
00:01:36.007 Message: lib/timer: Defining dependency "timer"
00:01:36.007 Message: lib/compressdev: Defining dependency "compressdev"
00:01:36.007 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:36.007 Message: lib/dmadev: Defining dependency "dmadev"
00:01:36.007 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:36.007 Message: lib/power: Defining dependency "power"
00:01:36.007 Message: lib/reorder: Defining dependency "reorder"
00:01:36.007 Message: lib/security: Defining dependency "security"
00:01:36.007 Has header "linux/userfaultfd.h" : YES
00:01:36.007 Has header "linux/vduse.h" : NO
00:01:36.008 Message: lib/vhost: Defining dependency "vhost"
00:01:36.008 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:36.008 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:36.008 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:36.008 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:36.008 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:36.008 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:36.008 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:36.008 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:36.008 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:36.008 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:36.008 Program doxygen found: YES (/bin/doxygen)
00:01:36.008 Configuring doxy-api-html.conf using configuration
00:01:36.008 Configuring doxy-api-man.conf using configuration
00:01:36.008 Program mandb found: YES (/bin/mandb)
00:01:36.008 Program sphinx-build found: NO
00:01:36.008 Configuring rte_build_config.h using configuration
00:01:36.008 Message:
00:01:36.008 =================
00:01:36.008 Applications Enabled
00:01:36.008 =================
00:01:36.008
00:01:36.008 apps:
00:01:36.008
00:01:36.008
00:01:36.008 Message:
00:01:36.008 =================
00:01:36.008 Libraries Enabled
00:01:36.008 =================
00:01:36.008
00:01:36.008 libs:
00:01:36.008 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:36.008 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:36.008 cryptodev, dmadev, power, reorder, security, vhost,
00:01:36.008
00:01:36.008 Message:
00:01:36.008 ===============
00:01:36.008 Drivers Enabled
00:01:36.008 ===============
00:01:36.008
00:01:36.008 common:
00:01:36.008
00:01:36.008 bus:
00:01:36.008 pci, vdev,
00:01:36.008 mempool:
00:01:36.008 ring,
00:01:36.008 dma:
00:01:36.008
00:01:36.008 net:
00:01:36.008
00:01:36.008 crypto:
00:01:36.008
00:01:36.008 compress:
00:01:36.008
00:01:36.008 vdpa:
00:01:36.008
00:01:36.008
00:01:36.008 Message:
00:01:36.008 =================
00:01:36.008 Content Skipped
00:01:36.008 =================
00:01:36.008
00:01:36.008 apps:
00:01:36.008 dumpcap: explicitly disabled via build config
00:01:36.008 graph: explicitly disabled via build config
00:01:36.008 pdump: explicitly disabled via build config
00:01:36.008 proc-info: explicitly disabled via build config
00:01:36.008 test-acl: explicitly disabled via build config
00:01:36.008 test-bbdev: explicitly disabled via build config
00:01:36.008 test-cmdline: explicitly disabled via build config
00:01:36.008 test-compress-perf: explicitly disabled via build config
00:01:36.008 test-crypto-perf: explicitly disabled via build config
00:01:36.008 test-dma-perf: explicitly disabled via build config
00:01:36.008 test-eventdev: explicitly disabled via build config
00:01:36.008 test-fib: explicitly disabled via build config
00:01:36.008 test-flow-perf: explicitly disabled via build config
00:01:36.008 test-gpudev: explicitly disabled via build config
00:01:36.008 test-mldev: explicitly disabled via build config
00:01:36.008 test-pipeline: explicitly disabled via build config
00:01:36.008 test-pmd: explicitly disabled via build config
00:01:36.008 test-regex: explicitly disabled via build config
00:01:36.008 test-sad: explicitly disabled via build config
00:01:36.008 test-security-perf: explicitly disabled via build config
00:01:36.008
00:01:36.008 libs:
00:01:36.008 metrics: explicitly disabled via build config
00:01:36.008 acl: explicitly disabled via build config
00:01:36.008 bbdev: explicitly disabled via build config
00:01:36.008 bitratestats: explicitly disabled via build config
00:01:36.008 bpf: explicitly disabled via build config
00:01:36.008 cfgfile: explicitly disabled via build config
00:01:36.008 distributor: explicitly disabled via build config
00:01:36.008 efd: explicitly disabled via build config
00:01:36.008 eventdev: explicitly disabled via build config
00:01:36.008 dispatcher: explicitly disabled via build config
00:01:36.008 gpudev: explicitly disabled via build config
00:01:36.008 gro: explicitly disabled via build config
00:01:36.008 gso: explicitly disabled via build config
00:01:36.008 ip_frag: explicitly disabled via build config
00:01:36.008 jobstats: explicitly disabled via build config
00:01:36.008 latencystats: explicitly disabled via build config
00:01:36.008 lpm: explicitly disabled via build config
00:01:36.008 member: explicitly disabled via build config
00:01:36.008 pcapng: explicitly disabled via build config
00:01:36.008 rawdev: explicitly disabled via build config
00:01:36.008 regexdev: explicitly disabled via build config
00:01:36.008 mldev: explicitly disabled via build config
00:01:36.008 rib: explicitly disabled via build config
00:01:36.008 sched: explicitly disabled via build config
00:01:36.008 stack: explicitly disabled via build config
00:01:36.008 ipsec: explicitly disabled via build config
00:01:36.008 pdcp: explicitly disabled via build config
00:01:36.008 fib: explicitly disabled via build config
00:01:36.008 port: explicitly disabled via build config
00:01:36.008 pdump: explicitly disabled via build config
00:01:36.008 table: explicitly disabled via build config
00:01:36.008 pipeline: explicitly disabled via build config
00:01:36.008 graph: explicitly disabled via build config
00:01:36.008 node: explicitly disabled via build config
00:01:36.008
00:01:36.008 drivers:
00:01:36.008 common/cpt: not in enabled drivers build config
00:01:36.008 common/dpaax: not in enabled drivers build config
00:01:36.008 common/iavf: not in enabled drivers build config
00:01:36.008 common/idpf: not in enabled drivers build config
00:01:36.008 common/mvep: not in enabled drivers build config
00:01:36.008 common/octeontx: not in enabled drivers build config
00:01:36.008 bus/auxiliary: not in enabled drivers build config
00:01:36.008 bus/cdx: not in enabled drivers build config
00:01:36.008 bus/dpaa: not in enabled drivers build config
00:01:36.008 bus/fslmc: not in enabled drivers build config
00:01:36.008 bus/ifpga: not in enabled drivers build config
00:01:36.008 bus/platform: not in enabled drivers build config
00:01:36.008 bus/vmbus: not in enabled drivers build config
00:01:36.008 common/cnxk: not in enabled drivers build config
00:01:36.008 common/mlx5: not in enabled drivers build config
00:01:36.008 common/nfp: not in enabled drivers build config
00:01:36.008 common/qat: not in enabled drivers build config
00:01:36.008 common/sfc_efx: not in enabled drivers build config
00:01:36.008 mempool/bucket: not in enabled drivers build config
00:01:36.008 mempool/cnxk: not in enabled drivers build config
00:01:36.008 mempool/dpaa: not in enabled drivers build config
00:01:36.008 mempool/dpaa2: not in enabled drivers build config
00:01:36.008 mempool/octeontx: not in enabled drivers build config
00:01:36.008 mempool/stack: not in enabled drivers build config
00:01:36.008 dma/cnxk: not in enabled drivers build config
00:01:36.008 dma/dpaa: not in enabled drivers build config
00:01:36.008 dma/dpaa2: not in enabled drivers build config
00:01:36.008 dma/hisilicon: not in enabled drivers build config
00:01:36.008 dma/idxd: not in enabled drivers build config
00:01:36.008 dma/ioat: not in enabled drivers build config
00:01:36.008 dma/skeleton: not in enabled drivers build config
00:01:36.008 net/af_packet: not in enabled drivers build config
00:01:36.008 net/af_xdp: not in enabled drivers build config
00:01:36.008 net/ark: not in enabled drivers build config
00:01:36.008 net/atlantic: not in enabled drivers build config
00:01:36.008 net/avp: not in enabled drivers build config
00:01:36.008 net/axgbe: not in enabled drivers build config
00:01:36.008 net/bnx2x: not in enabled drivers build config
00:01:36.008 net/bnxt: not in enabled drivers build config
00:01:36.008 net/bonding: not in enabled drivers build config
00:01:36.008 net/cnxk: not in enabled drivers build config
00:01:36.008 net/cpfl: not in enabled drivers build config
00:01:36.008 net/cxgbe: not in enabled drivers build config
00:01:36.008 net/dpaa: not in enabled drivers build config
00:01:36.008 net/dpaa2: not in enabled drivers build config
00:01:36.008 net/e1000: not in enabled drivers build config
00:01:36.008 net/ena: not in enabled drivers build config
00:01:36.008 net/enetc: not in enabled drivers build config
00:01:36.008 net/enetfec: not in enabled drivers build config
00:01:36.008 net/enic: not in enabled drivers build config
00:01:36.008 net/failsafe: not in enabled drivers build config
00:01:36.008 net/fm10k: not in enabled drivers build config
00:01:36.008 net/gve: not in enabled drivers build config
00:01:36.008 net/hinic: not in enabled drivers build config
00:01:36.008 net/hns3: not in enabled drivers build config
00:01:36.008 net/i40e: not in enabled drivers build config
00:01:36.008 net/iavf: not in enabled drivers build config
00:01:36.008 net/ice: not in enabled drivers build config
00:01:36.008 net/idpf: not in enabled drivers build config
00:01:36.008 net/igc: not in enabled drivers build config
00:01:36.008 net/ionic: not in enabled drivers build config
00:01:36.008 net/ipn3ke: not in enabled drivers build config
00:01:36.008 net/ixgbe: not in enabled drivers build config
00:01:36.008 net/mana: not in enabled drivers build config
00:01:36.008 net/memif: not in enabled drivers build config
00:01:36.008 net/mlx4: not in enabled drivers build config
00:01:36.008 net/mlx5: not in enabled drivers build config
00:01:36.008 net/mvneta: not in enabled drivers build config
00:01:36.008 net/mvpp2: not in enabled drivers build config
00:01:36.008 net/netvsc: not in enabled drivers build config
00:01:36.008 net/nfb: not in enabled drivers build config
00:01:36.008 net/nfp: not in enabled drivers build config
00:01:36.008 net/ngbe: not in enabled drivers build config
00:01:36.008 net/null: not in enabled drivers build config
00:01:36.008 net/octeontx: not in enabled drivers build config
00:01:36.008 net/octeon_ep: not in enabled drivers build config
00:01:36.008 net/pcap: not in enabled drivers build config
00:01:36.008 net/pfe: not in enabled drivers build config
00:01:36.008 net/qede: not in enabled drivers build config
00:01:36.008 net/ring: not in enabled drivers build config
00:01:36.008 net/sfc: not in enabled drivers build config
00:01:36.008 net/softnic: not in enabled drivers build config
00:01:36.009 net/tap: not in enabled drivers build config
00:01:36.009 net/thunderx: not in enabled drivers build config
00:01:36.009 net/txgbe: not in enabled drivers build config
00:01:36.009 net/vdev_netvsc: not in enabled drivers build config
00:01:36.009 net/vhost: not in enabled drivers build config
00:01:36.009 net/virtio: not in enabled drivers build config
00:01:36.009 net/vmxnet3: not in enabled drivers build config
00:01:36.009 raw/*: missing internal dependency, "rawdev"
00:01:36.009 crypto/armv8: not in enabled drivers build config
00:01:36.009 crypto/bcmfs: not in enabled drivers build config
00:01:36.009 crypto/caam_jr: not in enabled drivers build config
00:01:36.009 crypto/ccp: not in enabled drivers build config
00:01:36.009 crypto/cnxk: not in enabled drivers build config
00:01:36.009 crypto/dpaa_sec: not in enabled drivers build config
00:01:36.009 crypto/dpaa2_sec: not in enabled drivers build config
00:01:36.009 crypto/ipsec_mb: not in enabled drivers build config
00:01:36.009 crypto/mlx5: not in enabled drivers build config
00:01:36.009 crypto/mvsam: not in enabled drivers build config
00:01:36.009 crypto/nitrox: not in enabled drivers build config
00:01:36.009 crypto/null: not in enabled drivers build config
00:01:36.009 crypto/octeontx: not in enabled drivers build config
00:01:36.009 crypto/openssl: not in enabled drivers build config
00:01:36.009 crypto/scheduler: not in enabled drivers build config
00:01:36.009 crypto/uadk: not in enabled drivers build config
00:01:36.009 crypto/virtio: not in enabled drivers build config
00:01:36.009 compress/isal: not in enabled drivers build config
00:01:36.009 compress/mlx5: not in enabled drivers build config
00:01:36.009 compress/octeontx: not in enabled drivers build config
00:01:36.009 compress/zlib: not in enabled drivers build config
00:01:36.009 regex/*: missing internal dependency, "regexdev"
00:01:36.009 ml/*: missing internal dependency, "mldev"
00:01:36.009 vdpa/ifc: not in enabled drivers build config
00:01:36.009 vdpa/mlx5: not in enabled drivers build config
00:01:36.009 vdpa/nfp: not in enabled drivers build config
00:01:36.009 vdpa/sfc: not in enabled drivers build config
00:01:36.009 event/*: missing internal dependency, "eventdev"
00:01:36.009 baseband/*: missing internal dependency, "bbdev"
00:01:36.009 gpu/*: missing internal dependency, "gpudev"
00:01:36.009
00:01:36.009
00:01:36.578 Build targets in project: 85
00:01:36.578
00:01:36.578 DPDK 23.11.0
00:01:36.578
00:01:36.578 User defined options
00:01:36.578 buildtype : debug
00:01:36.578 default_library : static
00:01:36.578 libdir : lib
00:01:36.578 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:36.578 b_sanitize : address
00:01:36.578 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon
00:01:36.578 c_link_args :
00:01:36.578 cpu_instruction_set: native
00:01:36.578 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:01:36.578 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:01:36.578 enable_docs : false
00:01:36.578 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:36.578 enable_kmods : false
00:01:36.578 tests : false
00:01:36.578
00:01:36.578 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:36.578 NOTICE: You are using Python 3.6 which is EOL. Starting with v0.62.0, Meson will require Python 3.7 or newer
00:01:37.146 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:01:37.146 [1/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:37.146 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:37.146 [3/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:37.146 [4/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:37.146 [5/264] Linking static target lib/librte_kvargs.a
00:01:37.146 [6/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:37.146 [7/264] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:37.146 [8/264] Linking static target lib/librte_log.a
00:01:37.146 [9/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:37.146 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:37.146 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:37.146 [12/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:37.405 [13/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:37.405 [14/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:37.405 [15/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:37.405 [16/264] Linking static target lib/librte_telemetry.a
00:01:37.405 [17/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:37.405 [18/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:37.405 [19/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:37.405 [20/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:37.405 [21/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:37.665 [22/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
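Stepping back from the compile stream for a moment: the "User defined options" block printed by Meson above is what SPDK's configure passes down to the bundled DPDK. Reproducing it by hand would look approximately like the invocation below (option names and values copied from the log's option block; the exact command assembly is done by SPDK's build scripts, so this is only an approximation, with the long disable lists elided):

  cd /home/vagrant/spdk_repo/spdk/dpdk
  meson setup build-tmp \
      --buildtype=debug \
      --default-library=static \
      --libdir=lib \
      --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
      -Db_sanitize=address \
      -Dc_args='-fPIC -Werror -Wno-stringop-overflow -fcommon' \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
      -Denable_docs=false -Denable_kmods=false -Dtests=false
      # plus -Ddisable_apps=... and -Ddisable_libs=... with the lists shown above

The debug/static/ASan settings mirror the SPDK configure flags (--enable-debug, --enable-asan, --without-shared), which is why only 85 build targets remain in the trimmed-down DPDK.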
00:01:37.665 [23/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:37.665 [24/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:37.665 [25/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:37.665 [26/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:37.665 [27/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:37.665 [28/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:37.665 [29/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:37.665 [30/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:37.665 [31/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:37.665 [32/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:37.665 [33/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:37.924 [34/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:37.924 [35/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:37.924 [36/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:37.924 [37/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:37.924 [38/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:37.924 [39/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:37.924 [40/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:37.924 [41/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:37.924 [42/264] Linking target lib/librte_log.so.24.0
00:01:37.924 [43/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:38.183 [44/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:38.183 [45/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:38.183 [46/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:38.183 [47/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:38.183 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:38.183 [49/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:38.183 [50/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:38.183 [51/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:38.183 [52/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:38.183 [53/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:38.183 [54/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:38.183 [55/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:38.183 [56/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:38.442 [57/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:38.442 [58/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:38.443 [59/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:38.443 [60/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:38.443 [61/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:38.443 [62/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:38.443 [63/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:38.443 [64/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:01:38.443 [65/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:38.443 [66/264] Linking target lib/librte_kvargs.so.24.0
00:01:38.443 [67/264] Linking target lib/librte_telemetry.so.24.0
00:01:38.443 [68/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:38.443 [69/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:38.443 [70/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:38.702 [71/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:38.702 [72/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:38.702 [73/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:38.702 [74/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:38.702 [75/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:38.702 [76/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:38.702 [77/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:38.702 [78/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:38.702 [79/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:38.702 [80/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:38.961 [81/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:38.961 [82/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:38.961 [83/264] Linking static target lib/librte_ring.a
00:01:38.961 [84/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:01:38.961 [85/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:01:38.961 [86/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:38.961 [87/264] Linking static target lib/librte_eal.a
00:01:38.961 [88/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:38.961 [89/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:38.961 [90/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:38.961 [91/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:38.961 [92/264] Linking static target lib/librte_rcu.a
00:01:39.219 [93/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:39.219 [94/264] Linking static target lib/librte_mempool.a
00:01:39.219 [95/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:39.219 [96/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:39.219 [97/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:39.219 [98/264] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:39.219 [99/264] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:39.478 [100/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:39.478 [101/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:39.478 [102/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:39.478 [103/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:39.478 [104/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:39.478 [105/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:39.737 [106/264] Linking static target lib/librte_net.a
00:01:39.737 [107/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:39.737 [108/264] Linking static target lib/librte_meter.a
00:01:39.737 [109/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:39.737 [110/264] Linking static target lib/librte_mbuf.a
00:01:39.737 [111/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:39.737 [112/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:39.737 [113/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:39.737 [114/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:39.996 [115/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:39.996 [116/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:01:39.996 [117/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:39.996 [118/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:39.996 [119/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:39.996 [120/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:40.256 [121/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:40.256 [122/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:40.256 [123/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:40.515 [124/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:40.515 [125/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:40.515 [126/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:40.515 [127/264] Linking static target lib/librte_pci.a
00:01:40.515 [128/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:40.515 [129/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:40.515 [130/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:01:40.515 [131/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:40.515 [132/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:40.515 [133/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:40.515 [134/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:40.515 [135/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:40.515 [136/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:40.515 [137/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:40.515 [138/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:40.515 [139/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:40.775 [140/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:40.775 [141/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:40.775 [142/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:40.775 [143/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:40.775 [144/264] Linking static target lib/librte_cmdline.a
00:01:41.034 [145/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:41.034 [146/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:41.034 [147/264] Linking static target lib/librte_timer.a
00:01:41.034 [148/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:41.034 [149/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:41.034 [150/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:41.034 [151/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:41.293 [152/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:01:41.293 [153/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:41.293 [154/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:41.293 [155/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:01:41.293 [156/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:41.293 [157/264] Linking static target lib/librte_compressdev.a
00:01:41.293 [158/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:41.293 [159/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:41.293 [160/264] Linking static target lib/librte_dmadev.a
00:01:41.293 [161/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:01:41.552 [162/264] Linking static target lib/librte_hash.a
00:01:41.552 [163/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:01:41.552 [164/264] Linking static target lib/librte_ethdev.a
00:01:41.552 [165/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:01:41.552 [166/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:41.552 [167/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:41.552 [168/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:41.552 [169/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:41.811 [170/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:41.811 [171/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:01:41.811 [172/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:01:42.069 [173/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:01:42.069 [174/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:42.069 [175/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:01:42.069 [176/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:42.069 [177/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:42.069 [178/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:42.069 [179/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:01:42.069 [180/264] Linking static target lib/librte_power.a
00:01:42.069 [181/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:01:42.069 [182/264] Linking static target lib/librte_cryptodev.a
00:01:42.328 [183/264] Compiling C object 
lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:42.328 [184/264] Linking static target lib/librte_reorder.a 00:01:42.328 [185/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.328 [186/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:42.587 [187/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:42.587 [188/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:42.587 [189/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:42.587 [190/264] Linking static target lib/librte_security.a 00:01:42.846 [191/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.846 [192/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:43.105 [193/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.105 [194/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:43.105 [195/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:43.105 [196/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:43.105 [197/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:43.105 [198/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.365 [199/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:43.365 [200/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:43.365 [201/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:43.365 [202/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:43.365 [203/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:43.365 [204/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:43.365 [205/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:43.624 [206/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:43.624 [207/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:43.624 [208/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:43.624 [209/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:43.624 [210/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.624 [211/264] Linking static target drivers/librte_bus_vdev.a 00:01:43.624 [212/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:43.624 [213/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:43.624 [214/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:43.624 [215/264] Linking static target drivers/librte_bus_pci.a 00:01:43.883 [216/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:43.883 [217/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:44.142 [218/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:44.142 [219/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:44.142 [220/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:44.142 [221/264] Linking 
static target drivers/librte_mempool_ring.a 00:01:44.142 [222/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.709 [223/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.968 [224/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:47.518 [225/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.518 [226/264] Linking target lib/librte_eal.so.24.0 00:01:48.087 [227/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:48.087 [228/264] Linking target lib/librte_ring.so.24.0 00:01:48.087 [229/264] Linking target lib/librte_pci.so.24.0 00:01:48.087 [230/264] Linking target lib/librte_timer.so.24.0 00:01:48.087 [231/264] Linking target lib/librte_dmadev.so.24.0 00:01:48.087 [232/264] Linking target drivers/librte_bus_vdev.so.24.0 00:01:48.087 [233/264] Linking target lib/librte_meter.so.24.0 00:01:48.346 [234/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:48.346 [235/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:48.346 [236/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:48.346 [237/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:48.346 [238/264] Linking static target lib/librte_vhost.a 00:01:48.346 [239/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:48.346 [240/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:48.604 [241/264] Linking target lib/librte_mempool.so.24.0 00:01:48.604 [242/264] Linking target drivers/librte_bus_pci.so.24.0 00:01:48.604 [243/264] Linking target lib/librte_rcu.so.24.0 00:01:48.862 [244/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:48.862 [245/264] Linking target lib/librte_mbuf.so.24.0 00:01:48.862 [246/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.862 [247/264] Linking target drivers/librte_mempool_ring.so.24.0 00:01:49.121 [248/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:49.417 [249/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:49.417 [250/264] Linking target lib/librte_net.so.24.0 00:01:49.417 [251/264] Linking target lib/librte_compressdev.so.24.0 00:01:49.417 [252/264] Linking target lib/librte_reorder.so.24.0 00:01:49.417 [253/264] Linking target lib/librte_cryptodev.so.24.0 00:01:49.676 [254/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:49.934 [255/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:49.934 [256/264] Linking target lib/librte_cmdline.so.24.0 00:01:49.934 [257/264] Linking target lib/librte_hash.so.24.0 00:01:49.934 [258/264] Linking target lib/librte_security.so.24.0 00:01:49.934 [259/264] Linking target lib/librte_ethdev.so.24.0 00:01:50.192 [260/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:50.451 [261/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:50.451 [262/264] Linking target lib/librte_power.so.24.0 00:01:51.028 [263/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.028 [264/264] 
Linking target lib/librte_vhost.so.24.0 00:01:51.028 NOTICE: You are using Python 3.6 which is EOL. Starting with v0.62.0, Meson will require Python 3.7 or newer 00:01:52.931 CC lib/log/log.o 00:01:52.931 CC lib/ut_mock/mock.o 00:01:52.931 CC lib/ut/ut.o 00:01:52.931 CC lib/log/log_flags.o 00:01:52.931 CC lib/log/log_deprecated.o 00:01:52.931 LIB libspdk_ut_mock.a 00:01:52.931 LIB libspdk_ut.a 00:01:52.931 LIB libspdk_log.a 00:01:52.931 CC lib/util/base64.o 00:01:52.931 CXX lib/trace_parser/trace.o 00:01:52.931 CC lib/util/bit_array.o 00:01:52.931 CC lib/dma/dma.o 00:01:52.931 CC lib/ioat/ioat.o 00:01:52.931 CC lib/util/cpuset.o 00:01:52.931 CC lib/util/crc16.o 00:01:52.931 CC lib/util/crc32.o 00:01:52.931 CC lib/util/crc32c.o 00:01:52.931 CC lib/vfio_user/host/vfio_user_pci.o 00:01:53.189 CC lib/util/crc32_ieee.o 00:01:53.189 LIB libspdk_dma.a 00:01:53.189 CC lib/util/crc64.o 00:01:53.189 CC lib/util/dif.o 00:01:53.189 CC lib/vfio_user/host/vfio_user.o 00:01:53.189 CC lib/util/fd.o 00:01:53.189 CC lib/util/file.o 00:01:53.189 CC lib/util/hexlify.o 00:01:53.189 LIB libspdk_ioat.a 00:01:53.189 CC lib/util/iov.o 00:01:53.189 CC lib/util/math.o 00:01:53.189 LIB libspdk_vfio_user.a 00:01:53.189 CC lib/util/pipe.o 00:01:53.189 CC lib/util/strerror_tls.o 00:01:53.189 CC lib/util/string.o 00:01:53.189 CC lib/util/uuid.o 00:01:53.189 CC lib/util/fd_group.o 00:01:53.189 CC lib/util/xor.o 00:01:53.459 CC lib/util/zipf.o 00:01:53.459 LIB libspdk_util.a 00:01:53.459 CC lib/rdma/common.o 00:01:53.459 CC lib/idxd/idxd.o 00:01:53.459 CC lib/rdma/rdma_verbs.o 00:01:53.459 CC lib/conf/conf.o 00:01:53.459 CC lib/vmd/vmd.o 00:01:53.459 CC lib/idxd/idxd_user.o 00:01:53.459 CC lib/json/json_parse.o 00:01:53.459 CC lib/env_dpdk/env.o 00:01:53.459 LIB libspdk_trace_parser.a 00:01:53.459 CC lib/vmd/led.o 00:01:53.718 CC lib/env_dpdk/memory.o 00:01:53.718 CC lib/env_dpdk/pci.o 00:01:53.718 CC lib/env_dpdk/init.o 00:01:53.718 LIB libspdk_conf.a 00:01:53.718 CC lib/json/json_util.o 00:01:53.718 CC lib/env_dpdk/threads.o 00:01:53.718 CC lib/env_dpdk/pci_ioat.o 00:01:53.718 LIB libspdk_rdma.a 00:01:53.718 CC lib/json/json_write.o 00:01:53.718 CC lib/env_dpdk/pci_virtio.o 00:01:53.718 CC lib/env_dpdk/pci_vmd.o 00:01:53.977 LIB libspdk_idxd.a 00:01:53.977 CC lib/env_dpdk/pci_idxd.o 00:01:53.977 LIB libspdk_vmd.a 00:01:53.977 CC lib/env_dpdk/pci_event.o 00:01:53.977 CC lib/env_dpdk/sigbus_handler.o 00:01:53.977 CC lib/env_dpdk/pci_dpdk.o 00:01:53.977 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:53.977 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:53.977 LIB libspdk_json.a 00:01:54.235 CC lib/jsonrpc/jsonrpc_server.o 00:01:54.235 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:54.235 CC lib/jsonrpc/jsonrpc_client.o 00:01:54.235 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:54.235 LIB libspdk_env_dpdk.a 00:01:54.494 LIB libspdk_jsonrpc.a 00:01:54.494 CC lib/rpc/rpc.o 00:01:54.753 LIB libspdk_rpc.a 00:01:55.012 CC lib/trace/trace.o 00:01:55.012 CC lib/sock/sock.o 00:01:55.012 CC lib/trace/trace_flags.o 00:01:55.012 CC lib/notify/notify.o 00:01:55.012 CC lib/sock/sock_rpc.o 00:01:55.012 CC lib/trace/trace_rpc.o 00:01:55.012 CC lib/notify/notify_rpc.o 00:01:55.012 LIB libspdk_notify.a 00:01:55.271 LIB libspdk_trace.a 00:01:55.271 LIB libspdk_sock.a 00:01:55.530 CC lib/thread/thread.o 00:01:55.530 CC lib/thread/iobuf.o 00:01:55.530 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:55.530 CC lib/nvme/nvme_ctrlr.o 00:01:55.530 CC lib/nvme/nvme_fabric.o 00:01:55.530 CC lib/nvme/nvme_ns_cmd.o 00:01:55.530 CC lib/nvme/nvme_ns.o 00:01:55.530 CC lib/nvme/nvme_pcie_common.o 
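From 00:01:52 onward the output switches from the DPDK subproject to SPDK's own make build: each CC/CXX line compiles one object and each LIB line archives a finished component into a libspdk_*.a static library. As an illustration (not SPDK's exact make recipe), the "LIB libspdk_log.a" step above amounts to archiving the three log objects compiled just before it:

  # bundle the objects into a static library and write its symbol index (ar s)
  ar crs libspdk_log.a lib/log/log.o lib/log/log_flags.o lib/log/log_deprecated.o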
00:01:55.530 CC lib/nvme/nvme_pcie.o 00:01:55.530 CC lib/nvme/nvme_qpair.o 00:01:55.530 CC lib/nvme/nvme.o 00:01:55.788 CC lib/nvme/nvme_quirks.o 00:01:55.788 CC lib/nvme/nvme_transport.o 00:01:55.788 CC lib/nvme/nvme_discovery.o 00:01:55.788 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:56.047 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:56.047 CC lib/nvme/nvme_tcp.o 00:01:56.047 CC lib/nvme/nvme_opal.o 00:01:56.047 CC lib/nvme/nvme_io_msg.o 00:01:56.047 LIB libspdk_thread.a 00:01:56.047 CC lib/nvme/nvme_poll_group.o 00:01:56.047 CC lib/accel/accel.o 00:01:56.048 CC lib/nvme/nvme_zns.o 00:01:56.307 CC lib/nvme/nvme_cuse.o 00:01:56.307 CC lib/nvme/nvme_vfio_user.o 00:01:56.307 CC lib/accel/accel_rpc.o 00:01:56.307 CC lib/accel/accel_sw.o 00:01:56.307 CC lib/nvme/nvme_rdma.o 00:01:56.566 CC lib/blob/blobstore.o 00:01:56.567 CC lib/blob/request.o 00:01:56.567 CC lib/init/json_config.o 00:01:56.567 CC lib/init/subsystem.o 00:01:56.567 CC lib/virtio/virtio.o 00:01:56.567 CC lib/blob/zeroes.o 00:01:56.567 LIB libspdk_accel.a 00:01:56.567 CC lib/virtio/virtio_vhost_user.o 00:01:56.567 CC lib/init/subsystem_rpc.o 00:01:56.567 CC lib/init/rpc.o 00:01:56.567 CC lib/blob/blob_bs_dev.o 00:01:56.567 CC lib/virtio/virtio_vfio_user.o 00:01:56.826 CC lib/virtio/virtio_pci.o 00:01:56.826 CC lib/bdev/bdev.o 00:01:56.826 CC lib/bdev/bdev_rpc.o 00:01:56.826 CC lib/bdev/bdev_zone.o 00:01:56.826 LIB libspdk_init.a 00:01:56.826 CC lib/bdev/part.o 00:01:56.826 CC lib/bdev/scsi_nvme.o 00:01:56.826 CC lib/event/app.o 00:01:56.826 CC lib/event/reactor.o 00:01:56.826 CC lib/event/log_rpc.o 00:01:56.826 LIB libspdk_virtio.a 00:01:56.826 CC lib/event/app_rpc.o 00:01:56.826 CC lib/event/scheduler_static.o 00:01:57.141 LIB libspdk_nvme.a 00:01:57.141 LIB libspdk_event.a 00:01:57.709 LIB libspdk_blob.a 00:01:57.968 LIB libspdk_bdev.a 00:01:57.968 CC lib/blobfs/blobfs.o 00:01:57.968 CC lib/lvol/lvol.o 00:01:57.968 CC lib/blobfs/tree.o 00:01:58.226 CC lib/nbd/nbd.o 00:01:58.226 CC lib/scsi/dev.o 00:01:58.226 CC lib/nbd/nbd_rpc.o 00:01:58.226 CC lib/scsi/lun.o 00:01:58.226 CC lib/ftl/ftl_core.o 00:01:58.226 CC lib/scsi/port.o 00:01:58.226 CC lib/nvmf/ctrlr.o 00:01:58.226 CC lib/scsi/scsi.o 00:01:58.226 CC lib/scsi/scsi_bdev.o 00:01:58.226 CC lib/nvmf/ctrlr_discovery.o 00:01:58.226 CC lib/scsi/scsi_pr.o 00:01:58.226 CC lib/scsi/scsi_rpc.o 00:01:58.226 CC lib/scsi/task.o 00:01:58.485 LIB libspdk_nbd.a 00:01:58.485 CC lib/ftl/ftl_init.o 00:01:58.485 CC lib/ftl/ftl_layout.o 00:01:58.485 CC lib/nvmf/ctrlr_bdev.o 00:01:58.485 LIB libspdk_blobfs.a 00:01:58.485 LIB libspdk_lvol.a 00:01:58.485 CC lib/ftl/ftl_debug.o 00:01:58.485 CC lib/ftl/ftl_io.o 00:01:58.485 CC lib/nvmf/subsystem.o 00:01:58.485 CC lib/nvmf/nvmf.o 00:01:58.485 CC lib/nvmf/nvmf_rpc.o 00:01:58.485 CC lib/ftl/ftl_sb.o 00:01:58.485 LIB libspdk_scsi.a 00:01:58.485 CC lib/ftl/ftl_l2p.o 00:01:58.485 CC lib/nvmf/transport.o 00:01:58.744 CC lib/ftl/ftl_l2p_flat.o 00:01:58.744 CC lib/ftl/ftl_nv_cache.o 00:01:58.744 CC lib/ftl/ftl_band.o 00:01:58.744 CC lib/nvmf/tcp.o 00:01:58.744 CC lib/ftl/ftl_band_ops.o 00:01:58.744 CC lib/ftl/ftl_writer.o 00:01:58.744 CC lib/nvmf/rdma.o 00:01:59.003 CC lib/ftl/ftl_rq.o 00:01:59.003 CC lib/ftl/ftl_reloc.o 00:01:59.003 CC lib/ftl/ftl_l2p_cache.o 00:01:59.003 CC lib/ftl/ftl_p2l.o 00:01:59.003 CC lib/iscsi/conn.o 00:01:59.003 CC lib/iscsi/init_grp.o 00:01:59.003 CC lib/vhost/vhost.o 00:01:59.003 CC lib/ftl/mngt/ftl_mngt.o 00:01:59.003 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:59.003 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:59.314 CC 
lib/ftl/mngt/ftl_mngt_startup.o 00:01:59.314 CC lib/iscsi/iscsi.o 00:01:59.314 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:59.314 CC lib/iscsi/md5.o 00:01:59.314 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:59.314 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:59.314 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:59.314 CC lib/iscsi/param.o 00:01:59.314 CC lib/iscsi/portal_grp.o 00:01:59.314 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:59.314 CC lib/iscsi/tgt_node.o 00:01:59.314 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:59.314 CC lib/vhost/vhost_rpc.o 00:01:59.314 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:59.575 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:59.575 CC lib/iscsi/iscsi_subsystem.o 00:01:59.575 CC lib/iscsi/iscsi_rpc.o 00:01:59.575 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:59.575 CC lib/ftl/utils/ftl_conf.o 00:01:59.575 CC lib/ftl/utils/ftl_md.o 00:01:59.575 CC lib/iscsi/task.o 00:01:59.575 CC lib/ftl/utils/ftl_mempool.o 00:01:59.834 CC lib/ftl/utils/ftl_bitmap.o 00:01:59.834 LIB libspdk_nvmf.a 00:01:59.834 CC lib/ftl/utils/ftl_property.o 00:01:59.834 CC lib/vhost/vhost_scsi.o 00:01:59.834 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:59.834 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:59.834 CC lib/vhost/vhost_blk.o 00:01:59.834 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:59.834 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:59.834 CC lib/vhost/rte_vhost_user.o 00:01:59.834 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:59.834 LIB libspdk_iscsi.a 00:01:59.834 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:59.834 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:59.834 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:59.834 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:59.834 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:59.834 CC lib/ftl/base/ftl_base_dev.o 00:02:00.093 CC lib/ftl/base/ftl_base_bdev.o 00:02:00.093 CC lib/ftl/ftl_trace.o 00:02:00.093 LIB libspdk_ftl.a 00:02:00.387 LIB libspdk_vhost.a 00:02:00.645 CC module/env_dpdk/env_dpdk_rpc.o 00:02:00.645 CC module/accel/iaa/accel_iaa.o 00:02:00.645 CC module/accel/dsa/accel_dsa.o 00:02:00.645 CC module/scheduler/gscheduler/gscheduler.o 00:02:00.645 CC module/sock/posix/posix.o 00:02:00.645 CC module/accel/ioat/accel_ioat.o 00:02:00.645 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:00.645 CC module/blob/bdev/blob_bdev.o 00:02:00.645 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:00.645 CC module/accel/error/accel_error.o 00:02:00.904 LIB libspdk_env_dpdk_rpc.a 00:02:00.904 CC module/accel/error/accel_error_rpc.o 00:02:00.904 LIB libspdk_scheduler_dpdk_governor.a 00:02:00.904 CC module/accel/ioat/accel_ioat_rpc.o 00:02:00.904 LIB libspdk_scheduler_dynamic.a 00:02:00.904 CC module/accel/dsa/accel_dsa_rpc.o 00:02:00.904 CC module/accel/iaa/accel_iaa_rpc.o 00:02:00.904 LIB libspdk_scheduler_gscheduler.a 00:02:00.904 LIB libspdk_blob_bdev.a 00:02:00.904 LIB libspdk_accel_error.a 00:02:00.904 LIB libspdk_accel_ioat.a 00:02:00.904 LIB libspdk_accel_dsa.a 00:02:00.904 LIB libspdk_accel_iaa.a 00:02:00.904 CC module/bdev/delay/vbdev_delay.o 00:02:00.904 CC module/bdev/lvol/vbdev_lvol.o 00:02:00.904 CC module/bdev/gpt/gpt.o 00:02:00.904 CC module/blobfs/bdev/blobfs_bdev.o 00:02:00.904 CC module/bdev/error/vbdev_error.o 00:02:00.904 CC module/bdev/malloc/bdev_malloc.o 00:02:00.904 CC module/bdev/null/bdev_null.o 00:02:01.163 CC module/bdev/nvme/bdev_nvme.o 00:02:01.163 CC module/bdev/passthru/vbdev_passthru.o 00:02:01.163 CC module/bdev/gpt/vbdev_gpt.o 00:02:01.163 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:01.163 LIB libspdk_sock_posix.a 00:02:01.163 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:01.163 CC 
module/bdev/error/vbdev_error_rpc.o 00:02:01.163 CC module/bdev/null/bdev_null_rpc.o 00:02:01.163 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:01.163 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:01.163 LIB libspdk_blobfs_bdev.a 00:02:01.163 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:01.163 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:01.163 LIB libspdk_bdev_passthru.a 00:02:01.421 LIB libspdk_bdev_gpt.a 00:02:01.421 LIB libspdk_bdev_error.a 00:02:01.421 LIB libspdk_bdev_delay.a 00:02:01.421 LIB libspdk_bdev_null.a 00:02:01.421 LIB libspdk_bdev_malloc.a 00:02:01.421 CC module/bdev/raid/bdev_raid.o 00:02:01.421 CC module/bdev/split/vbdev_split.o 00:02:01.421 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:01.421 CC module/bdev/ftl/bdev_ftl.o 00:02:01.421 CC module/bdev/aio/bdev_aio.o 00:02:01.421 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:01.421 LIB libspdk_bdev_lvol.a 00:02:01.421 CC module/bdev/daos/bdev_daos.o 00:02:01.421 CC module/bdev/daos/bdev_daos_rpc.o 00:02:01.421 CC module/bdev/split/vbdev_split_rpc.o 00:02:01.681 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:01.681 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:01.681 CC module/bdev/aio/bdev_aio_rpc.o 00:02:01.681 LIB libspdk_bdev_split.a 00:02:01.681 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:01.681 CC module/bdev/nvme/nvme_rpc.o 00:02:01.681 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:01.681 LIB libspdk_bdev_daos.a 00:02:01.681 LIB libspdk_bdev_zone_block.a 00:02:01.681 LIB libspdk_bdev_aio.a 00:02:01.681 CC module/bdev/nvme/bdev_mdns_client.o 00:02:01.681 CC module/bdev/raid/bdev_raid_rpc.o 00:02:01.681 CC module/bdev/raid/bdev_raid_sb.o 00:02:01.681 LIB libspdk_bdev_ftl.a 00:02:01.681 CC module/bdev/raid/raid0.o 00:02:01.681 CC module/bdev/nvme/vbdev_opal.o 00:02:01.681 CC module/bdev/raid/raid1.o 00:02:01.940 LIB libspdk_bdev_virtio.a 00:02:01.940 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:01.940 CC module/bdev/raid/concat.o 00:02:01.940 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:01.940 LIB libspdk_bdev_raid.a 00:02:01.940 LIB libspdk_bdev_nvme.a 00:02:02.510 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:02.510 CC module/event/subsystems/iobuf/iobuf.o 00:02:02.510 CC module/event/subsystems/sock/sock.o 00:02:02.510 CC module/event/subsystems/scheduler/scheduler.o 00:02:02.510 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:02.510 CC module/event/subsystems/vmd/vmd.o 00:02:02.510 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:02.510 LIB libspdk_event_vhost_blk.a 00:02:02.510 LIB libspdk_event_sock.a 00:02:02.510 LIB libspdk_event_iobuf.a 00:02:02.510 LIB libspdk_event_scheduler.a 00:02:02.510 LIB libspdk_event_vmd.a 00:02:02.769 CC module/event/subsystems/accel/accel.o 00:02:02.769 LIB libspdk_event_accel.a 00:02:03.029 CC module/event/subsystems/bdev/bdev.o 00:02:03.288 LIB libspdk_event_bdev.a 00:02:03.547 CC module/event/subsystems/nbd/nbd.o 00:02:03.547 CC module/event/subsystems/scsi/scsi.o 00:02:03.547 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:03.547 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:03.547 LIB libspdk_event_nbd.a 00:02:03.547 LIB libspdk_event_scsi.a 00:02:03.806 LIB libspdk_event_nvmf.a 00:02:03.806 CC module/event/subsystems/iscsi/iscsi.o 00:02:03.806 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:03.806 LIB libspdk_event_vhost_scsi.a 00:02:03.806 LIB libspdk_event_iscsi.a 00:02:04.065 CC app/spdk_nvme_perf/perf.o 00:02:04.065 CXX app/trace/trace.o 00:02:04.065 CC app/trace_record/trace_record.o 00:02:04.065 CC app/spdk_lspci/spdk_lspci.o 00:02:04.065 CC 
app/nvmf_tgt/nvmf_main.o 00:02:04.065 CC app/iscsi_tgt/iscsi_tgt.o 00:02:04.065 CC examples/accel/perf/accel_perf.o 00:02:04.065 CC app/spdk_tgt/spdk_tgt.o 00:02:04.065 CC test/accel/dif/dif.o 00:02:04.065 LINK spdk_lspci 00:02:04.065 CC test/app/bdev_svc/bdev_svc.o 00:02:04.324 LINK spdk_trace_record 00:02:04.324 LINK nvmf_tgt 00:02:04.324 LINK spdk_tgt 00:02:04.324 LINK iscsi_tgt 00:02:04.324 LINK bdev_svc 00:02:04.324 LINK accel_perf 00:02:04.324 LINK dif 00:02:04.324 LINK spdk_trace 00:02:04.584 LINK spdk_nvme_perf 00:02:04.584 CC app/spdk_nvme_identify/identify.o 00:02:04.584 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:04.843 CC app/spdk_nvme_discover/discovery_aer.o 00:02:04.843 LINK nvme_fuzz 00:02:04.843 LINK spdk_nvme_discover 00:02:05.103 CC examples/bdev/hello_world/hello_bdev.o 00:02:05.103 LINK spdk_nvme_identify 00:02:05.103 CC app/spdk_top/spdk_top.o 00:02:05.103 LINK hello_bdev 00:02:05.671 LINK spdk_top 00:02:05.671 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:05.671 CC app/vhost/vhost.o 00:02:05.672 CC examples/bdev/bdevperf/bdevperf.o 00:02:05.931 LINK vhost 00:02:05.931 CC app/spdk_dd/spdk_dd.o 00:02:05.931 CC test/app/histogram_perf/histogram_perf.o 00:02:06.190 CC test/bdev/bdevio/bdevio.o 00:02:06.190 LINK bdevperf 00:02:06.190 CC app/fio/nvme/fio_plugin.o 00:02:06.190 LINK histogram_perf 00:02:06.190 CC app/fio/bdev/fio_plugin.o 00:02:06.190 LINK spdk_dd 00:02:06.190 CC test/app/jsoncat/jsoncat.o 00:02:06.450 LINK bdevio 00:02:06.450 LINK jsoncat 00:02:06.450 LINK iscsi_fuzz 00:02:06.450 LINK spdk_bdev 00:02:06.450 LINK spdk_nvme 00:02:06.709 CC test/app/stub/stub.o 00:02:06.709 LINK stub 00:02:06.969 TEST_HEADER include/spdk/rpc.h 00:02:06.969 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:06.969 TEST_HEADER include/spdk/accel_module.h 00:02:06.969 TEST_HEADER include/spdk/bit_pool.h 00:02:06.969 TEST_HEADER include/spdk/ioat.h 00:02:06.969 TEST_HEADER include/spdk/blobfs.h 00:02:06.969 TEST_HEADER include/spdk/pipe.h 00:02:06.969 TEST_HEADER include/spdk/accel.h 00:02:06.969 TEST_HEADER include/spdk/version.h 00:02:06.969 TEST_HEADER include/spdk/trace_parser.h 00:02:06.969 TEST_HEADER include/spdk/opal_spec.h 00:02:06.969 TEST_HEADER include/spdk/uuid.h 00:02:06.969 TEST_HEADER include/spdk/bdev.h 00:02:06.969 TEST_HEADER include/spdk/hexlify.h 00:02:06.969 TEST_HEADER include/spdk/likely.h 00:02:06.969 CC test/blobfs/mkfs/mkfs.o 00:02:06.969 TEST_HEADER include/spdk/vhost.h 00:02:06.969 TEST_HEADER include/spdk/memory.h 00:02:06.969 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:06.969 TEST_HEADER include/spdk/dma.h 00:02:06.969 TEST_HEADER include/spdk/nbd.h 00:02:06.969 TEST_HEADER include/spdk/env.h 00:02:06.969 TEST_HEADER include/spdk/nvme_zns.h 00:02:06.969 TEST_HEADER include/spdk/env_dpdk.h 00:02:06.969 TEST_HEADER include/spdk/init.h 00:02:06.969 TEST_HEADER include/spdk/fd_group.h 00:02:06.969 TEST_HEADER include/spdk/bdev_module.h 00:02:06.969 TEST_HEADER include/spdk/opal.h 00:02:06.969 TEST_HEADER include/spdk/event.h 00:02:06.969 TEST_HEADER include/spdk/base64.h 00:02:06.969 TEST_HEADER include/spdk/nvmf.h 00:02:06.969 TEST_HEADER include/spdk/nvmf_spec.h 00:02:06.969 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:06.969 TEST_HEADER include/spdk/fd.h 00:02:06.969 TEST_HEADER include/spdk/barrier.h 00:02:06.969 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:06.969 TEST_HEADER include/spdk/zipf.h 00:02:06.969 TEST_HEADER include/spdk/scheduler.h 00:02:06.969 TEST_HEADER include/spdk/dif.h 00:02:06.969 TEST_HEADER include/spdk/scsi_spec.h 
00:02:06.969 TEST_HEADER include/spdk/blob.h 00:02:06.969 TEST_HEADER include/spdk/cpuset.h 00:02:06.969 TEST_HEADER include/spdk/thread.h 00:02:06.969 TEST_HEADER include/spdk/tree.h 00:02:06.969 TEST_HEADER include/spdk/xor.h 00:02:06.969 TEST_HEADER include/spdk/assert.h 00:02:06.969 TEST_HEADER include/spdk/file.h 00:02:06.969 TEST_HEADER include/spdk/endian.h 00:02:06.969 TEST_HEADER include/spdk/notify.h 00:02:06.969 TEST_HEADER include/spdk/util.h 00:02:06.969 TEST_HEADER include/spdk/log.h 00:02:06.969 TEST_HEADER include/spdk/sock.h 00:02:06.969 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:06.969 TEST_HEADER include/spdk/config.h 00:02:06.969 TEST_HEADER include/spdk/histogram_data.h 00:02:06.969 TEST_HEADER include/spdk/nvme_intel.h 00:02:06.969 TEST_HEADER include/spdk/idxd_spec.h 00:02:06.969 TEST_HEADER include/spdk/crc16.h 00:02:06.969 TEST_HEADER include/spdk/bdev_zone.h 00:02:06.969 TEST_HEADER include/spdk/stdinc.h 00:02:06.969 TEST_HEADER include/spdk/vmd.h 00:02:06.969 TEST_HEADER include/spdk/scsi.h 00:02:06.969 TEST_HEADER include/spdk/jsonrpc.h 00:02:06.969 TEST_HEADER include/spdk/blob_bdev.h 00:02:06.969 TEST_HEADER include/spdk/crc32.h 00:02:06.969 TEST_HEADER include/spdk/nvmf_transport.h 00:02:06.969 TEST_HEADER include/spdk/idxd.h 00:02:06.969 TEST_HEADER include/spdk/crc64.h 00:02:06.969 TEST_HEADER include/spdk/nvme.h 00:02:06.969 TEST_HEADER include/spdk/iscsi_spec.h 00:02:06.969 TEST_HEADER include/spdk/queue.h 00:02:06.969 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:06.969 TEST_HEADER include/spdk/lvol.h 00:02:06.969 TEST_HEADER include/spdk/ftl.h 00:02:06.969 TEST_HEADER include/spdk/trace.h 00:02:06.969 TEST_HEADER include/spdk/ioat_spec.h 00:02:06.969 TEST_HEADER include/spdk/conf.h 00:02:06.969 TEST_HEADER include/spdk/ublk.h 00:02:06.969 TEST_HEADER include/spdk/bit_array.h 00:02:06.969 TEST_HEADER include/spdk/pci_ids.h 00:02:06.969 TEST_HEADER include/spdk/nvme_spec.h 00:02:06.969 LINK mkfs 00:02:06.969 TEST_HEADER include/spdk/string.h 00:02:06.969 TEST_HEADER include/spdk/gpt_spec.h 00:02:06.969 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:06.969 TEST_HEADER include/spdk/json.h 00:02:06.969 TEST_HEADER include/spdk/reduce.h 00:02:06.969 TEST_HEADER include/spdk/mmio.h 00:02:06.969 CXX test/cpp_headers/rpc.o 00:02:06.969 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:07.228 CXX test/cpp_headers/vfio_user_spec.o 00:02:07.228 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:07.228 CXX test/cpp_headers/accel_module.o 00:02:07.228 CC examples/blob/hello_world/hello_blob.o 00:02:07.487 CXX test/cpp_headers/bit_pool.o 00:02:07.487 LINK vhost_fuzz 00:02:07.487 LINK hello_blob 00:02:07.487 CC examples/blob/cli/blobcli.o 00:02:07.487 CXX test/cpp_headers/ioat.o 00:02:07.487 CXX test/cpp_headers/blobfs.o 00:02:07.747 CXX test/cpp_headers/pipe.o 00:02:07.747 CC examples/ioat/perf/perf.o 00:02:07.747 LINK blobcli 00:02:07.747 CXX test/cpp_headers/accel.o 00:02:07.747 CXX test/cpp_headers/version.o 00:02:07.747 LINK ioat_perf 00:02:08.005 CC examples/ioat/verify/verify.o 00:02:08.006 CC examples/nvme/hello_world/hello_world.o 00:02:08.006 CXX test/cpp_headers/trace_parser.o 00:02:08.006 CC test/dma/test_dma/test_dma.o 00:02:08.006 LINK verify 00:02:08.006 LINK hello_world 00:02:08.006 CXX test/cpp_headers/opal_spec.o 00:02:08.006 CC test/env/mem_callbacks/mem_callbacks.o 00:02:08.264 CXX test/cpp_headers/uuid.o 00:02:08.264 CC test/event/event_perf/event_perf.o 00:02:08.264 CXX test/cpp_headers/bdev.o 00:02:08.264 LINK test_dma 00:02:08.264 CXX 
test/cpp_headers/hexlify.o 00:02:08.264 LINK event_perf 00:02:08.523 CXX test/cpp_headers/likely.o 00:02:08.523 CXX test/cpp_headers/vhost.o 00:02:08.523 LINK mem_callbacks 00:02:08.523 CC examples/sock/hello_world/hello_sock.o 00:02:08.523 CXX test/cpp_headers/memory.o 00:02:08.523 CXX test/cpp_headers/vfio_user_pci.o 00:02:08.783 CXX test/cpp_headers/dma.o 00:02:08.783 CXX test/cpp_headers/nbd.o 00:02:08.783 CC test/lvol/esnap/esnap.o 00:02:08.783 CC test/env/vtophys/vtophys.o 00:02:08.783 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:08.783 LINK hello_sock 00:02:08.783 CC examples/nvme/reconnect/reconnect.o 00:02:08.783 CXX test/cpp_headers/env.o 00:02:08.783 CC test/event/reactor/reactor.o 00:02:08.783 LINK vtophys 00:02:08.783 LINK env_dpdk_post_init 00:02:08.783 LINK reactor 00:02:08.783 CXX test/cpp_headers/nvme_zns.o 00:02:09.040 LINK reconnect 00:02:09.040 CXX test/cpp_headers/env_dpdk.o 00:02:09.040 CXX test/cpp_headers/init.o 00:02:09.040 CXX test/cpp_headers/fd_group.o 00:02:09.299 CXX test/cpp_headers/bdev_module.o 00:02:09.299 CXX test/cpp_headers/opal.o 00:02:09.299 CC test/env/memory/memory_ut.o 00:02:09.299 CC examples/vmd/lsvmd/lsvmd.o 00:02:09.299 CC examples/vmd/led/led.o 00:02:09.559 LINK lsvmd 00:02:09.559 CC test/event/reactor_perf/reactor_perf.o 00:02:09.559 CXX test/cpp_headers/event.o 00:02:09.559 CC test/env/pci/pci_ut.o 00:02:09.559 LINK led 00:02:09.559 CC examples/nvmf/nvmf/nvmf.o 00:02:09.559 LINK reactor_perf 00:02:09.559 CXX test/cpp_headers/base64.o 00:02:09.559 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:09.818 CXX test/cpp_headers/nvmf.o 00:02:09.818 LINK pci_ut 00:02:09.818 LINK nvmf 00:02:09.818 LINK memory_ut 00:02:09.818 CXX test/cpp_headers/nvmf_spec.o 00:02:09.818 CXX test/cpp_headers/blobfs_bdev.o 00:02:10.109 LINK nvme_manage 00:02:10.109 CXX test/cpp_headers/fd.o 00:02:10.109 CC test/nvme/aer/aer.o 00:02:10.109 CXX test/cpp_headers/barrier.o 00:02:10.109 CC test/rpc_client/rpc_client_test.o 00:02:10.109 CC test/event/app_repeat/app_repeat.o 00:02:10.109 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:10.109 CC test/nvme/reset/reset.o 00:02:10.109 CC test/thread/poller_perf/poller_perf.o 00:02:10.368 LINK aer 00:02:10.368 LINK app_repeat 00:02:10.368 LINK rpc_client_test 00:02:10.368 CC examples/util/zipf/zipf.o 00:02:10.368 LINK poller_perf 00:02:10.368 CXX test/cpp_headers/zipf.o 00:02:10.368 LINK reset 00:02:10.368 LINK zipf 00:02:10.368 CXX test/cpp_headers/scheduler.o 00:02:10.627 CXX test/cpp_headers/dif.o 00:02:10.627 CXX test/cpp_headers/scsi_spec.o 00:02:10.627 CC examples/nvme/arbitration/arbitration.o 00:02:10.627 CXX test/cpp_headers/blob.o 00:02:10.886 CXX test/cpp_headers/cpuset.o 00:02:10.886 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:02:10.886 CC test/thread/lock/spdk_lock.o 00:02:10.886 CXX test/cpp_headers/thread.o 00:02:10.886 LINK arbitration 00:02:10.886 LINK esnap 00:02:10.886 CXX test/cpp_headers/tree.o 00:02:10.886 CC test/nvme/sgl/sgl.o 00:02:10.886 LINK histogram_ut 00:02:10.886 CC test/event/scheduler/scheduler.o 00:02:10.886 CXX test/cpp_headers/xor.o 00:02:10.886 CXX test/cpp_headers/assert.o 00:02:11.145 CC test/nvme/e2edp/nvme_dp.o 00:02:11.145 LINK sgl 00:02:11.145 LINK scheduler 00:02:11.145 CC examples/nvme/hotplug/hotplug.o 00:02:11.145 CXX test/cpp_headers/file.o 00:02:11.145 CC test/unit/lib/accel/accel.c/accel_ut.o 00:02:11.404 LINK nvme_dp 00:02:11.404 CXX test/cpp_headers/endian.o 00:02:11.404 LINK hotplug 00:02:11.404 CC examples/thread/thread/thread_ex.o 00:02:11.404 CXX 
test/cpp_headers/notify.o 00:02:11.663 CXX test/cpp_headers/util.o 00:02:11.663 LINK spdk_lock 00:02:11.663 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:11.663 LINK thread 00:02:11.663 CXX test/cpp_headers/log.o 00:02:11.663 CC examples/idxd/perf/perf.o 00:02:11.922 LINK cmb_copy 00:02:11.922 CXX test/cpp_headers/sock.o 00:02:11.922 CC test/nvme/overhead/overhead.o 00:02:11.922 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:11.922 LINK idxd_perf 00:02:12.181 CC test/nvme/err_injection/err_injection.o 00:02:12.181 LINK overhead 00:02:12.181 CXX test/cpp_headers/config.o 00:02:12.181 CXX test/cpp_headers/histogram_data.o 00:02:12.181 CC examples/nvme/abort/abort.o 00:02:12.181 CXX test/cpp_headers/nvme_intel.o 00:02:12.181 LINK err_injection 00:02:12.181 CXX test/cpp_headers/idxd_spec.o 00:02:12.181 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:12.181 LINK abort 00:02:12.441 CXX test/cpp_headers/crc16.o 00:02:12.441 LINK pmr_persistence 00:02:12.441 LINK accel_ut 00:02:12.441 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:12.441 CXX test/cpp_headers/bdev_zone.o 00:02:12.441 CXX test/cpp_headers/stdinc.o 00:02:12.441 LINK interrupt_tgt 00:02:12.441 CXX test/cpp_headers/vmd.o 00:02:12.700 CXX test/cpp_headers/scsi.o 00:02:12.700 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:02:12.700 CXX test/cpp_headers/jsonrpc.o 00:02:12.700 CC test/nvme/startup/startup.o 00:02:12.700 CXX test/cpp_headers/blob_bdev.o 00:02:12.700 CXX test/cpp_headers/crc32.o 00:02:12.960 LINK startup 00:02:12.960 CC test/unit/lib/bdev/part.c/part_ut.o 00:02:12.960 CXX test/cpp_headers/nvmf_transport.o 00:02:12.960 CXX test/cpp_headers/idxd.o 00:02:12.960 CXX test/cpp_headers/crc64.o 00:02:12.960 CXX test/cpp_headers/nvme.o 00:02:12.960 CXX test/cpp_headers/iscsi_spec.o 00:02:12.960 CXX test/cpp_headers/queue.o 00:02:12.960 CXX test/cpp_headers/nvmf_cmd.o 00:02:12.960 CXX test/cpp_headers/lvol.o 00:02:13.219 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:02:13.219 CC test/nvme/reserve/reserve.o 00:02:13.219 CXX test/cpp_headers/ftl.o 00:02:13.219 CXX test/cpp_headers/trace.o 00:02:13.219 CXX test/cpp_headers/ioat_spec.o 00:02:13.219 LINK reserve 00:02:13.219 LINK scsi_nvme_ut 00:02:13.219 CXX test/cpp_headers/conf.o 00:02:13.219 CXX test/cpp_headers/ublk.o 00:02:13.219 CXX test/cpp_headers/bit_array.o 00:02:13.478 CXX test/cpp_headers/pci_ids.o 00:02:13.478 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:02:13.478 CXX test/cpp_headers/nvme_spec.o 00:02:13.478 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:02:13.478 CC test/nvme/simple_copy/simple_copy.o 00:02:13.478 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:02:13.478 CC test/nvme/connect_stress/connect_stress.o 00:02:13.478 CXX test/cpp_headers/string.o 00:02:13.478 LINK simple_copy 00:02:13.478 LINK connect_stress 00:02:13.738 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:02:13.738 CXX test/cpp_headers/gpt_spec.o 00:02:13.738 LINK gpt_ut 00:02:13.738 CXX test/cpp_headers/nvme_ocssd.o 00:02:13.738 LINK blob_bdev_ut 00:02:13.997 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:02:13.997 CXX test/cpp_headers/json.o 00:02:13.998 CC test/nvme/boot_partition/boot_partition.o 00:02:13.998 LINK vbdev_lvol_ut 00:02:13.998 CC test/unit/lib/blob/blob.c/blob_ut.o 00:02:13.998 CXX test/cpp_headers/reduce.o 00:02:13.998 LINK boot_partition 00:02:14.257 CXX test/cpp_headers/mmio.o 00:02:14.257 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:02:14.257 CC test/nvme/compliance/nvme_compliance.o 00:02:14.257 CC test/nvme/fused_ordering/fused_ordering.o 
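The long run of CXX test/cpp_headers/*.o lines that ends above builds one translation unit per public SPDK header from the TEST_HEADER list; these appear to be self-containment checks, i.e. each header must compile with nothing included before it. A sketch of the idea under that assumption (file name and include path are hypothetical):

  # compile a TU that includes only the header under test; success means the
  # header pulls in all of its own dependencies
  printf '#include <spdk/notify.h>\n' > hdr_check.cpp
  g++ -I include -c hdr_check.cpp -o /dev/null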
00:02:14.257 LINK part_ut 00:02:14.257 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:02:14.517 LINK fused_ordering 00:02:14.517 LINK bdev_zone_ut 00:02:14.517 LINK tree_ut 00:02:14.517 LINK nvme_compliance 00:02:14.517 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:02:14.517 CC test/unit/lib/dma/dma.c/dma_ut.o 00:02:14.776 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:02:14.776 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:14.776 LINK bdev_raid_ut 00:02:14.776 LINK dma_ut 00:02:14.776 LINK doorbell_aers 00:02:15.035 LINK bdev_ut 00:02:15.035 LINK vbdev_zone_block_ut 00:02:15.035 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:02:15.035 CC test/unit/lib/event/app.c/app_ut.o 00:02:15.035 CC test/nvme/fdp/fdp.o 00:02:15.293 CC test/nvme/cuse/cuse.o 00:02:15.293 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:02:15.293 LINK bdev_ut 00:02:15.294 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:02:15.294 LINK bdev_raid_sb_ut 00:02:15.294 LINK fdp 00:02:15.294 LINK blobfs_async_ut 00:02:15.294 LINK app_ut 00:02:15.553 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:02:15.553 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:02:15.553 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:02:15.553 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:02:15.553 LINK reactor_ut 00:02:15.553 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:02:15.812 LINK blobfs_bdev_ut 00:02:15.812 LINK cuse 00:02:15.812 LINK concat_ut 00:02:15.812 LINK raid1_ut 00:02:15.812 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:02:15.812 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:02:15.812 LINK ioat_ut 00:02:15.812 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:02:15.812 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:02:16.072 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:02:16.072 CC test/unit/lib/log/log.c/log_ut.o 00:02:16.072 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:02:16.072 LINK log_ut 00:02:16.072 LINK jsonrpc_server_ut 00:02:16.072 LINK blobfs_sync_ut 00:02:16.072 LINK init_grp_ut 00:02:16.331 LINK json_util_ut 00:02:16.331 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:02:16.331 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:02:16.331 CC test/unit/lib/notify/notify.c/notify_ut.o 00:02:16.331 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:02:16.331 LINK conn_ut 00:02:16.591 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:02:16.591 LINK notify_ut 00:02:16.591 CC test/unit/lib/iscsi/param.c/param_ut.o 00:02:16.591 LINK json_parse_ut 00:02:16.850 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:02:16.850 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:02:16.850 LINK param_ut 00:02:16.850 LINK lvol_ut 00:02:17.108 LINK json_write_ut 00:02:17.108 LINK dev_ut 00:02:17.108 LINK blob_ut 00:02:17.108 LINK nvme_ut 00:02:17.108 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:02:17.108 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:02:17.108 CC test/unit/lib/sock/sock.c/sock_ut.o 00:02:17.369 CC test/unit/lib/thread/thread.c/thread_ut.o 00:02:17.369 LINK bdev_nvme_ut 00:02:17.369 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:02:17.369 LINK lun_ut 00:02:17.369 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:02:17.369 LINK scsi_ut 00:02:17.369 LINK portal_grp_ut 00:02:17.369 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:02:17.369 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:02:17.628 LINK iscsi_ut 00:02:17.628 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:02:17.628 CC 
test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:02:17.628 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:02:17.628 LINK scsi_pr_ut 00:02:17.889 CC test/unit/lib/util/base64.c/base64_ut.o 00:02:17.889 LINK sock_ut 00:02:17.889 LINK scsi_bdev_ut 00:02:17.889 LINK nvme_ctrlr_ocssd_cmd_ut 00:02:17.889 LINK nvme_ctrlr_cmd_ut 00:02:18.187 LINK tgt_node_ut 00:02:18.187 LINK base64_ut 00:02:18.187 CC test/unit/lib/sock/posix.c/posix_ut.o 00:02:18.187 LINK tcp_ut 00:02:18.187 LINK thread_ut 00:02:18.187 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:02:18.187 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:02:18.187 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:02:18.187 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:02:18.187 LINK nvme_ns_ut 00:02:18.187 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:02:18.187 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:02:18.446 LINK pci_event_ut 00:02:18.446 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:02:18.446 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:02:18.446 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:02:18.446 LINK bit_array_ut 00:02:18.705 LINK posix_ut 00:02:18.705 LINK iobuf_ut 00:02:18.705 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:02:18.705 LINK nvme_ctrlr_ut 00:02:18.705 LINK cpuset_ut 00:02:18.963 LINK nvme_poll_group_ut 00:02:18.963 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:02:18.963 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:02:18.963 LINK nvme_ns_ocssd_cmd_ut 00:02:18.963 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:02:18.963 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:02:18.963 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:02:18.963 LINK crc16_ut 00:02:18.963 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:02:18.963 LINK nvme_ns_cmd_ut 00:02:19.222 LINK subsystem_ut 00:02:19.222 LINK nvme_pcie_ut 00:02:19.222 LINK nvme_qpair_ut 00:02:19.222 LINK rpc_ut 00:02:19.222 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:02:19.222 LINK nvme_quirks_ut 00:02:19.222 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:02:19.222 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:02:19.222 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:02:19.222 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:02:19.482 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:02:19.482 LINK crc32_ieee_ut 00:02:19.482 CC test/unit/lib/rdma/common.c/common_ut.o 00:02:19.482 LINK idxd_user_ut 00:02:19.482 LINK nvme_transport_ut 00:02:19.482 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:02:19.741 LINK nvme_io_msg_ut 00:02:19.741 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:02:19.741 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:02:19.741 LINK ctrlr_ut 00:02:19.741 LINK common_ut 00:02:19.741 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:02:19.741 LINK nvme_fabric_ut 00:02:20.000 LINK crc32c_ut 00:02:20.000 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:02:20.000 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:02:20.000 LINK nvme_pcie_common_ut 00:02:20.000 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:02:20.000 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:02:20.000 LINK idxd_ut 00:02:20.259 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:02:20.259 LINK ftl_l2p_ut 00:02:20.259 LINK nvme_opal_ut 00:02:20.259 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:02:20.259 LINK crc64_ut 00:02:20.259 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:02:20.259 LINK vhost_ut 00:02:20.259 LINK nvme_tcp_ut 00:02:20.259 CC 
test/unit/lib/util/dif.c/dif_ut.o 00:02:20.517 LINK ftl_bitmap_ut 00:02:20.517 CC test/unit/lib/util/iov.c/iov_ut.o 00:02:20.517 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:02:20.517 CC test/unit/lib/util/math.c/math_ut.o 00:02:20.517 LINK ftl_io_ut 00:02:20.517 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:02:20.517 LINK iov_ut 00:02:20.777 LINK math_ut 00:02:20.777 LINK nvme_cuse_ut 00:02:20.777 LINK ftl_mempool_ut 00:02:20.777 LINK ftl_band_ut 00:02:20.777 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:02:20.777 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:02:20.777 CC test/unit/lib/util/string.c/string_ut.o 00:02:21.036 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:02:21.036 CC test/unit/lib/util/xor.c/xor_ut.o 00:02:21.036 LINK nvme_rdma_ut 00:02:21.036 LINK ftl_mngt_ut 00:02:21.036 LINK subsystem_ut 00:02:21.036 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:02:21.036 LINK string_ut 00:02:21.036 LINK pipe_ut 00:02:21.036 LINK dif_ut 00:02:21.036 LINK xor_ut 00:02:21.036 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:02:21.036 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:02:21.294 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:02:21.294 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:02:21.553 LINK ftl_sb_ut 00:02:21.553 LINK ftl_layout_upgrade_ut 00:02:21.553 LINK ctrlr_bdev_ut 00:02:21.553 LINK nvmf_ut 00:02:21.847 LINK ctrlr_discovery_ut 00:02:22.423 LINK transport_ut 00:02:23.072 LINK rdma_ut 00:02:23.331 ************************************ 00:02:23.331 END TEST unittest_build 00:02:23.331 ************************************ 00:02:23.331 00:02:23.331 real 0m56.411s 00:02:23.331 user 4m28.858s 00:02:23.331 sys 1m19.653s 00:02:23.331 20:29:06 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:23.331 20:29:06 -- common/autotest_common.sh@10 -- $ set +x 00:02:23.331 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:02:23.331 20:29:06 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:02:23.331 20:29:06 -- nvmf/common.sh@7 -- # uname -s 00:02:23.331 20:29:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:23.331 20:29:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:23.331 20:29:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:23.331 20:29:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:23.331 20:29:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:23.331 20:29:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:23.331 20:29:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:23.331 20:29:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:23.331 20:29:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:23.331 20:29:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:23.331 20:29:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ceaf7e2d-1146-43e5-860f-aacca783ceba 00:02:23.331 20:29:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=ceaf7e2d-1146-43e5-860f-aacca783ceba 00:02:23.331 20:29:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:23.331 20:29:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:23.331 20:29:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:23.331 20:29:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:23.331 20:29:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:23.331 20:29:06 -- scripts/common.sh@441 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:23.331 20:29:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:23.331 20:29:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:02:23.331 20:29:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:02:23.331 20:29:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:02:23.331 20:29:06 -- paths/export.sh@5 -- # export PATH 00:02:23.331 20:29:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:02:23.331 20:29:06 -- nvmf/common.sh@46 -- # : 0 00:02:23.331 20:29:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:23.331 20:29:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:23.331 20:29:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:23.331 20:29:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:23.331 20:29:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:23.331 20:29:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:23.331 20:29:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:23.331 20:29:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:23.331 20:29:06 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:23.331 20:29:06 -- spdk/autotest.sh@32 -- # uname -s 00:02:23.331 20:29:06 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:23.331 20:29:06 -- spdk/autotest.sh@33 -- # old_core_pattern=core 00:02:23.331 20:29:06 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:23.331 20:29:06 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:02:23.331 20:29:06 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:23.331 20:29:06 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:23.591 modprobe: FATAL: Module nbd not found. 
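In the autotest.sh prologue above, the script saves the old core pattern (old_core_pattern=core) and echoes a pipe command naming scripts/core-collector.sh. Presumably (the redirect target is not shown in the xtrace) this lands in /proc/sys/kernel/core_pattern, where a leading '|' tells the kernel to pipe each core dump into the named program:

  # route kernel core dumps through the collector;
  # %P = PID, %s = signal number, %t = dump time (see core(5))
  echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' > /proc/sys/kernel/core_pattern

The "modprobe: FATAL: Module nbd not found" just above is tolerated: the very next xtrace entry is true, consistent with a "modprobe nbd || true" guard, so nbd-dependent tests are simply skipped on this kernel.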
00:02:23.591 20:29:06 -- spdk/autotest.sh@44 -- # true 00:02:23.591 20:29:06 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:23.591 20:29:06 -- spdk/autotest.sh@46 -- # udevadm=/sbin/udevadm 00:02:23.591 20:29:06 -- spdk/autotest.sh@48 -- # udevadm_pid=30763 00:02:23.591 20:29:06 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:02:23.591 20:29:06 -- spdk/autotest.sh@47 -- # /sbin/udevadm monitor --property 00:02:23.591 20:29:06 -- spdk/autotest.sh@54 -- # echo 30765 00:02:23.591 20:29:06 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:02:23.591 20:29:06 -- spdk/autotest.sh@56 -- # echo 30766 00:02:23.591 20:29:06 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:02:23.591 20:29:06 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:02:23.591 20:29:06 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:23.591 20:29:06 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:23.591 20:29:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:23.591 20:29:06 -- common/autotest_common.sh@10 -- # set +x 00:02:23.591 20:29:06 -- spdk/autotest.sh@70 -- # create_test_list 00:02:23.591 20:29:06 -- common/autotest_common.sh@736 -- # xtrace_disable 00:02:23.591 20:29:06 -- common/autotest_common.sh@10 -- # set +x 00:02:23.591 20:29:06 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:02:23.591 20:29:06 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:02:23.591 20:29:06 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:02:23.591 20:29:06 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:02:23.591 20:29:06 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:02:23.591 20:29:06 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:23.591 20:29:06 -- common/autotest_common.sh@1440 -- # uname 00:02:23.591 20:29:06 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:02:23.591 20:29:06 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:23.591 20:29:06 -- common/autotest_common.sh@1460 -- # uname 00:02:23.591 20:29:06 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:02:23.591 20:29:06 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:02:23.591 20:29:06 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:02:23.591 20:29:06 -- spdk/autotest.sh@83 -- # hash lcov 00:02:23.591 20:29:06 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:23.591 20:29:06 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:02:23.591 --rc lcov_branch_coverage=1 00:02:23.591 --rc lcov_function_coverage=1 00:02:23.591 --rc genhtml_branch_coverage=1 00:02:23.591 --rc genhtml_function_coverage=1 00:02:23.591 --rc genhtml_legend=1 00:02:23.591 --rc geninfo_all_blocks=1 00:02:23.591 ' 00:02:23.591 20:29:06 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:02:23.591 --rc lcov_branch_coverage=1 00:02:23.591 --rc lcov_function_coverage=1 00:02:23.591 --rc genhtml_branch_coverage=1 00:02:23.591 --rc genhtml_function_coverage=1 00:02:23.591 --rc genhtml_legend=1 00:02:23.591 --rc geninfo_all_blocks=1 00:02:23.591 ' 00:02:23.591 20:29:06 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:02:23.591 --rc lcov_branch_coverage=1 00:02:23.591 --rc lcov_function_coverage=1 00:02:23.591 --rc genhtml_branch_coverage=1 00:02:23.591 --rc 
genhtml_function_coverage=1 00:02:23.591 --rc genhtml_legend=1 00:02:23.591 --rc geninfo_all_blocks=1 00:02:23.591 --no-external' 00:02:23.591 20:29:06 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:02:23.591 --rc lcov_branch_coverage=1 00:02:23.591 --rc lcov_function_coverage=1 00:02:23.591 --rc genhtml_branch_coverage=1 00:02:23.591 --rc genhtml_function_coverage=1 00:02:23.591 --rc genhtml_legend=1 00:02:23.591 --rc geninfo_all_blocks=1 00:02:23.591 --no-external' 00:02:23.591 20:29:06 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:23.591 lcov: LCOV version 1.15 00:02:23.591 20:29:07 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:02:31.731 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:31.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:31.731 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:31.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:31.731 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:31.731 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:46.679 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:46.679 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:02:46.679 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:46.679 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:02:46.679 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:46.679 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:02:46.679 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:46.679 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:46.679 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:02:46.679 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:02:46.679 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:46.679 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:02:46.679 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:46.679 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:46.679 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:46.679 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:02:46.679 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:46.679 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno [... dozens of analogous "no functions found" / geninfo WARNING pairs for further test/cpp_headers *.gcno files elided; every header .gcno under test/cpp_headers reports the same two-line warning ...] 00:02:46.680
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:46.680 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:46.680 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:46.680 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:02:46.680 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:46.680 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:02:46.680 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:46.680 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:02:46.680 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:46.680 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:02:46.680 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:46.680 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:02:46.680 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:02:46.680 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:02:46.680 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:46.680 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:02:46.680 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:46.680 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:46.680 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:02:46.680 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:02:46.680 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:46.680 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:02:46.680 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:46.680 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:25.410 20:30:04 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:03:25.410 20:30:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:25.410 20:30:04 -- common/autotest_common.sh@10 -- # set +x 00:03:25.410 20:30:04 -- spdk/autotest.sh@102 -- # rm -f 00:03:25.410 20:30:04 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:25.410 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:03:25.410 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:03:25.410 20:30:05 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:03:25.410 20:30:05 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:25.410 20:30:05 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:25.410 20:30:05 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:25.410 20:30:05 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:25.410 20:30:05 -- common/autotest_common.sh@1658 -- 
# is_block_zoned nvme0n1 00:03:25.410 20:30:05 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:25.410 20:30:05 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:25.410 20:30:05 -- common/autotest_common.sh@1649 -- # return 1 00:03:25.410 20:30:05 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:03:25.410 20:30:05 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:03:25.410 20:30:05 -- spdk/autotest.sh@121 -- # grep -v p 00:03:25.410 20:30:05 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:25.410 20:30:05 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:25.410 20:30:05 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:03:25.410 20:30:05 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:25.410 20:30:05 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:25.410 No valid GPT data, bailing 00:03:25.410 20:30:05 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:25.410 20:30:05 -- scripts/common.sh@393 -- # pt= 00:03:25.410 20:30:05 -- scripts/common.sh@394 -- # return 1 00:03:25.410 20:30:05 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:25.410 1+0 records in 00:03:25.410 1+0 records out 00:03:25.410 1048576 bytes (1.0 MB) copied, 0.00460547 s, 228 MB/s 00:03:25.410 20:30:05 -- spdk/autotest.sh@129 -- # sync 00:03:25.410 20:30:05 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:25.410 20:30:05 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:25.410 20:30:05 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:25.410 20:30:06 -- spdk/autotest.sh@135 -- # uname -s 00:03:25.410 20:30:06 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:03:25.410 20:30:06 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:25.410 20:30:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:25.410 20:30:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:25.410 20:30:06 -- common/autotest_common.sh@10 -- # set +x 00:03:25.410 ************************************ 00:03:25.410 START TEST setup.sh 00:03:25.410 ************************************ 00:03:25.410 20:30:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:25.410 * Looking for test storage... 00:03:25.410 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:25.410 20:30:06 -- setup/test-setup.sh@10 -- # uname -s 00:03:25.410 20:30:06 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:25.410 20:30:06 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:25.410 20:30:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:25.410 20:30:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:25.410 20:30:06 -- common/autotest_common.sh@10 -- # set +x 00:03:25.410 ************************************ 00:03:25.410 START TEST acl 00:03:25.410 ************************************ 00:03:25.410 20:30:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:25.410 * Looking for test storage... 
00:03:25.410 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:25.410 20:30:06 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:25.410 20:30:06 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:25.410 20:30:06 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:25.410 20:30:06 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:25.410 20:30:06 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:25.410 20:30:06 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:25.410 20:30:06 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:25.410 20:30:06 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:25.410 20:30:06 -- common/autotest_common.sh@1649 -- # return 1 00:03:25.410 20:30:06 -- setup/acl.sh@12 -- # devs=() 00:03:25.410 20:30:06 -- setup/acl.sh@12 -- # declare -a devs 00:03:25.410 20:30:06 -- setup/acl.sh@13 -- # drivers=() 00:03:25.410 20:30:06 -- setup/acl.sh@13 -- # declare -A drivers 00:03:25.410 20:30:06 -- setup/acl.sh@51 -- # setup reset 00:03:25.410 20:30:06 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:25.410 20:30:06 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:25.410 20:30:07 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:25.410 20:30:07 -- setup/acl.sh@16 -- # local dev driver 00:03:25.410 20:30:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.410 20:30:07 -- setup/acl.sh@15 -- # setup output status 00:03:25.410 20:30:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:25.410 20:30:07 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:25.410 Hugepages 00:03:25.410 node hugesize free / total 00:03:25.410 20:30:07 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:25.410 20:30:07 -- setup/acl.sh@19 -- # continue 00:03:25.410 20:30:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.410 00:03:25.410 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:25.410 20:30:07 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:25.410 20:30:07 -- setup/acl.sh@19 -- # continue 00:03:25.410 20:30:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.410 20:30:07 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:25.410 20:30:07 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:25.410 20:30:07 -- setup/acl.sh@20 -- # continue 00:03:25.410 20:30:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.410 20:30:07 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:03:25.410 20:30:07 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:25.410 20:30:07 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:03:25.410 20:30:07 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:25.410 20:30:07 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:25.410 20:30:07 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.410 20:30:07 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:25.410 20:30:07 -- setup/acl.sh@54 -- # run_test denied denied 00:03:25.410 20:30:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:25.410 20:30:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:25.410 20:30:07 -- common/autotest_common.sh@10 -- # set +x 00:03:25.410 ************************************ 00:03:25.410 START TEST denied 00:03:25.410 ************************************ 00:03:25.410 20:30:07 -- common/autotest_common.sh@1104 -- # denied 00:03:25.410 20:30:07 -- setup/acl.sh@38 -- # 
PCI_BLOCKED=' 0000:00:06.0' 00:03:25.410 20:30:07 -- setup/acl.sh@38 -- # setup output config 00:03:25.410 20:30:07 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:03:25.410 20:30:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:25.410 20:30:07 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:25.410 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:03:25.411 20:30:07 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:03:25.411 20:30:07 -- setup/acl.sh@28 -- # local dev driver 00:03:25.411 20:30:07 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:25.411 20:30:07 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:03:25.411 20:30:07 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:03:25.411 20:30:07 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:25.411 20:30:07 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:25.411 20:30:07 -- setup/acl.sh@41 -- # setup reset 00:03:25.411 20:30:07 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:25.411 20:30:07 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:25.411 ************************************ 00:03:25.411 END TEST denied 00:03:25.411 ************************************ 00:03:25.411 00:03:25.411 real 0m0.790s 00:03:25.411 user 0m0.353s 00:03:25.411 sys 0m0.501s 00:03:25.411 20:30:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:25.411 20:30:08 -- common/autotest_common.sh@10 -- # set +x 00:03:25.411 20:30:08 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:25.411 20:30:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:25.411 20:30:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:25.411 20:30:08 -- common/autotest_common.sh@10 -- # set +x 00:03:25.411 ************************************ 00:03:25.411 START TEST allowed 00:03:25.411 ************************************ 00:03:25.411 20:30:08 -- common/autotest_common.sh@1104 -- # allowed 00:03:25.411 20:30:08 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:03:25.411 20:30:08 -- setup/acl.sh@45 -- # setup output config 00:03:25.411 20:30:08 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:03:25.411 20:30:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:25.411 20:30:08 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:25.669 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:25.669 20:30:08 -- setup/acl.sh@47 -- # verify 00:03:25.669 20:30:08 -- setup/acl.sh@28 -- # local dev driver 00:03:25.669 20:30:08 -- setup/acl.sh@48 -- # setup reset 00:03:25.669 20:30:08 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:25.669 20:30:08 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:25.927 00:03:25.927 real 0m0.947s 00:03:25.927 user 0m0.353s 00:03:25.927 sys 0m0.612s 00:03:25.927 20:30:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:25.927 20:30:09 -- common/autotest_common.sh@10 -- # set +x 00:03:25.927 ************************************ 00:03:25.927 END TEST allowed 00:03:25.927 ************************************ 00:03:25.927 ************************************ 00:03:25.927 END TEST acl 00:03:25.927 ************************************ 00:03:25.927 00:03:25.927 real 0m2.614s 00:03:25.927 user 0m1.094s 00:03:25.927 sys 0m1.660s 00:03:25.927 20:30:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:25.927 20:30:09 -- 
common/autotest_common.sh@10 -- # set +x 00:03:26.188 20:30:09 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:26.188 20:30:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:26.188 20:30:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:26.188 20:30:09 -- common/autotest_common.sh@10 -- # set +x 00:03:26.188 ************************************ 00:03:26.188 START TEST hugepages 00:03:26.188 ************************************ 00:03:26.188 20:30:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:26.188 * Looking for test storage... 00:03:26.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:26.188 20:30:09 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:26.188 20:30:09 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:26.188 20:30:09 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:26.188 20:30:09 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:26.188 20:30:09 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:26.188 20:30:09 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:26.188 20:30:09 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:26.188 20:30:09 -- setup/common.sh@18 -- # local node= 00:03:26.188 20:30:09 -- setup/common.sh@19 -- # local var val 00:03:26.188 20:30:09 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.188 20:30:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.188 20:30:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.188 20:30:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.188 20:30:09 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.188 20:30:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.188 20:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.188 20:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.188 20:30:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 4801292 kB' 'MemAvailable: 7428448 kB' 'Buffers: 2068 kB' 'Cached: 2818820 kB' 'SwapCached: 0 kB' 'Active: 2188560 kB' 'Inactive: 727628 kB' 'Active(anon): 95508 kB' 'Inactive(anon): 16676 kB' 'Active(file): 2093052 kB' 'Inactive(file): 710952 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 24 kB' 'Writeback: 0 kB' 'AnonPages: 95732 kB' 'Mapped: 25252 kB' 'Shmem: 16884 kB' 'Slab: 170932 kB' 'SReclaimable: 121540 kB' 'SUnreclaim: 49392 kB' 'KernelStack: 3808 kB' 'PageTables: 7616 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4053420 kB' 'Committed_AS: 342572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38768 kB' 'VmallocChunk: 34359690404 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 10240 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 89964 kB' 'DirectMap2M: 5152768 kB' 'DirectMap1G: 9437184 kB' 00:03:26.188 20:30:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.188 20:30:09 -- setup/common.sh@32 -- # continue 00:03:26.188 20:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.188 20:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.188 20:30:09 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.188 20:30:09 -- setup/common.sh@32 -- # continue 00:03:26.188 20:30:09 -- setup/common.sh@31 -- # IFS=': ' 
00:03:26.188 20:30:09 -- setup/common.sh@31 -- # read -r var val _ [... identical per-field xtrace cycles elided: for each remaining /proc/meminfo field, setup/common.sh@32 tests [[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]] and hits continue, then setup/common.sh@31 resets IFS=': ' and reads the next var/val pair ...] 00:03:26.189 20:30:09 -- setup/common.sh@32 -- # [[ HugePages_Free ==
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.189 20:30:09 -- setup/common.sh@32 -- # continue 00:03:26.189 20:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.189 20:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.189 20:30:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.189 20:30:09 -- setup/common.sh@32 -- # continue 00:03:26.189 20:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.189 20:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.189 20:30:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.189 20:30:09 -- setup/common.sh@32 -- # continue 00:03:26.189 20:30:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.189 20:30:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.189 20:30:09 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.189 20:30:09 -- setup/common.sh@33 -- # echo 2048 00:03:26.189 20:30:09 -- setup/common.sh@33 -- # return 0 00:03:26.189 20:30:09 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:26.189 20:30:09 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:26.189 20:30:09 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:26.189 20:30:09 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:26.189 20:30:09 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:26.189 20:30:09 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:26.189 20:30:09 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:26.189 20:30:09 -- setup/hugepages.sh@207 -- # get_nodes 00:03:26.189 20:30:09 -- setup/hugepages.sh@27 -- # local node 00:03:26.189 20:30:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.189 20:30:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:26.189 20:30:09 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:26.189 20:30:09 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:26.189 20:30:09 -- setup/hugepages.sh@208 -- # clear_hp 00:03:26.189 20:30:09 -- setup/hugepages.sh@37 -- # local node hp 00:03:26.189 20:30:09 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:26.189 20:30:09 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:26.189 20:30:09 -- setup/hugepages.sh@41 -- # echo 0 00:03:26.189 20:30:09 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:26.189 20:30:09 -- setup/hugepages.sh@41 -- # echo 0 00:03:26.189 20:30:09 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:26.190 20:30:09 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:26.190 20:30:09 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:26.190 20:30:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:26.190 20:30:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:26.190 20:30:09 -- common/autotest_common.sh@10 -- # set +x 00:03:26.190 ************************************ 00:03:26.190 START TEST default_setup 00:03:26.190 ************************************ 00:03:26.190 20:30:09 -- common/autotest_common.sh@1104 -- # default_setup 00:03:26.190 20:30:09 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:26.190 20:30:09 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:26.190 20:30:09 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:26.190 20:30:09 -- setup/hugepages.sh@51 -- # shift 00:03:26.190 20:30:09 -- 
setup/hugepages.sh@52 -- # node_ids=("$@") 00:03:26.190 20:30:09 -- setup/hugepages.sh@52 -- # local node_ids 00:03:26.190 20:30:09 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:26.190 20:30:09 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:26.190 20:30:09 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:26.190 20:30:09 -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:03:26.190 20:30:09 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:26.190 20:30:09 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:26.190 20:30:09 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:26.190 20:30:09 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:26.190 20:30:09 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:26.190 20:30:09 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:26.190 20:30:09 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:26.190 20:30:09 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:26.190 20:30:09 -- setup/hugepages.sh@73 -- # return 0 00:03:26.190 20:30:09 -- setup/hugepages.sh@137 -- # setup output 00:03:26.190 20:30:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:26.190 20:30:09 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:26.449 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:03:26.712 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:26.712 20:30:10 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:26.712 20:30:10 -- setup/hugepages.sh@89 -- # local node 00:03:26.712 20:30:10 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:26.712 20:30:10 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:26.712 20:30:10 -- setup/hugepages.sh@92 -- # local surp 00:03:26.712 20:30:10 -- setup/hugepages.sh@93 -- # local resv 00:03:26.712 20:30:10 -- setup/hugepages.sh@94 -- # local anon 00:03:26.712 20:30:10 -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]] 00:03:26.712 20:30:10 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:26.712 20:30:10 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:26.712 20:30:10 -- setup/common.sh@18 -- # local node= 00:03:26.712 20:30:10 -- setup/common.sh@19 -- # local var val 00:03:26.712 20:30:10 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.712 20:30:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.712 20:30:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.712 20:30:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.712 20:30:10 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.712 20:30:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.712 20:30:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.712 20:30:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.712 20:30:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 6901876 kB' 'MemAvailable: 9529032 kB' 'Buffers: 2068 kB' 'Cached: 2818820 kB' 'SwapCached: 0 kB' 'Active: 2190380 kB' 'Inactive: 727628 kB' 'Active(anon): 97328 kB' 'Inactive(anon): 16676 kB' 'Active(file): 2093052 kB' 'Inactive(file): 710952 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 24 kB' 'Writeback: 0 kB' 'AnonPages: 95728 kB' 'Mapped: 25252 kB' 'Shmem: 16884 kB' 'Slab: 170932 kB' 'SReclaimable: 121540 kB' 'SUnreclaim: 49392 kB' 'KernelStack: 3808 kB' 'PageTables: 9072 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5101996 kB' 
'Committed_AS: 343248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690404 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 10240 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 89964 kB' 'DirectMap2M: 5152768 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: setup/common.sh@32 tests every /proc/meminfo field name against AnonHugePages and continues until it matches]
00:03:26.713 20:30:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.713 20:30:10 -- setup/common.sh@33 -- # echo 10240 00:03:26.713 20:30:10 -- setup/common.sh@33 -- # return 0
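The scan condensed above is the whole of the traced get_meminfo lookup: slurp a meminfo source into an array, then walk it field by field with IFS=': ' until the requested name matches and print that field's value. A minimal, self-contained sketch of the pattern, assuming the shape the trace shows (the function name get_meminfo_sketch is ours, not SPDK's; the real helper also takes an optional node argument, which appears later in this trace):

    #!/usr/bin/env bash
    # Sketch of the lookup pattern traced above: read the meminfo source
    # into an array, then scan it line by line, splitting on ': ' and
    # printing the value once the requested field name matches.
    get_meminfo_sketch() {
        local get=$1 line var val _
        local -a mem
        mapfile -t mem < /proc/meminfo
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"    # kB for most fields, a bare count for HugePages_*
            return 0
        done
        return 1
    }

Against the snapshot printed in this run, get_meminfo_sketch AnonHugePages would yield 10240, which is exactly the anon=10240 the trace records next.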
00:03:26.713 20:30:10 -- setup/hugepages.sh@97 -- # anon=10240
00:03:26.713 20:30:10 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:26.713 20:30:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.713 20:30:10 -- setup/common.sh@18 -- # local node= 00:03:26.713 20:30:10 -- setup/common.sh@19 -- # local var val 00:03:26.713 20:30:10 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.713 20:30:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.713 20:30:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.713 20:30:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.713 20:30:10 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.713 20:30:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.713 20:30:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.713 20:30:10 -- setup/common.sh@31 -- # read -r var val _
00:03:26.714 20:30:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 6901592 kB' 'MemAvailable: 9528816 kB' 'Buffers: 2068 kB' 'Cached: 2818884 kB' 'SwapCached: 0 kB' 'Active: 2189936 kB' 'Inactive: 727688 kB' 'Active(anon): 96880 kB' 'Inactive(anon): 16672 kB' 'Active(file): 2093056 kB' 'Inactive(file): 711016 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 28 kB' 'Writeback: 0 kB' 'AnonPages: 95700 kB' 'Mapped: 25296 kB' 'Shmem: 16880 kB' 'Slab: 171040 kB' 'SReclaimable: 121540 kB' 'SUnreclaim: 49500 kB' 'KernelStack: 3776 kB' 'PageTables: 8732 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5101996 kB' 'Committed_AS: 343248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690404 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 10240 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 89964 kB' 'DirectMap2M: 5152768 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: the same field-by-field scan, continuing until HugePages_Surp matches]
00:03:26.715 20:30:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.715 20:30:10 -- setup/common.sh@33 -- # echo 0 00:03:26.715 20:30:10 -- setup/common.sh@33 -- # return 0 00:03:26.715 20:30:10 -- setup/hugepages.sh@99 -- # surp=0
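This query and its two neighbours are what verify_nr_hugepages stores its bookkeeping in. A sketch of the mapping, reusing the hypothetical helper from the sketch above (the values in the comments are the ones this run reported):

    anon=$(get_meminfo_sketch AnonHugePages)    # 10240 kB in this run
    surp=$(get_meminfo_sketch HugePages_Surp)   # 0 in this run
    resv=$(get_meminfo_sketch HugePages_Rsvd)   # 0 in this run (queried next)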
00:03:26.715 20:30:10 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:26.715 20:30:10 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:26.715 20:30:10 -- setup/common.sh@18 -- # local node= 00:03:26.715 20:30:10 -- setup/common.sh@19 -- # local var val 00:03:26.715 20:30:10 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.715 20:30:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.715 20:30:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.715 20:30:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.715 20:30:10 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.715 20:30:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.715 20:30:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.715 20:30:10 -- setup/common.sh@31 -- # read -r var val _
00:03:26.715 20:30:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 6901592 kB' 'MemAvailable: 9528816 kB' 'Buffers: 2068 kB' 'Cached: 2818884 kB' 'SwapCached: 0 kB' 'Active: 2190196 kB' 'Inactive: 727688 kB' 'Active(anon): 97140 kB' 'Inactive(anon): 16672 kB' 'Active(file): 2093056 kB' 'Inactive(file): 711016 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 28 kB' 'Writeback: 0 kB' 'AnonPages: 96088 kB' 'Mapped: 25296 kB' 'Shmem: 16880 kB' 'Slab: 171040 kB' 'SReclaimable: 121540 kB' 'SUnreclaim: 49500 kB' 'KernelStack: 3776 kB' 'PageTables: 8732 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5101996 kB' 'Committed_AS: 343248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690404 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 10240 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 89964 kB' 'DirectMap2M: 5152768 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: the same field-by-field scan, continuing until HugePages_Rsvd matches]
00:03:26.716 20:30:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.716 20:30:10 -- setup/common.sh@33 -- # echo 0 00:03:26.716 20:30:10 -- setup/common.sh@33 -- # return 0
00:03:26.716 20:30:10 -- setup/hugepages.sh@100 -- # resv=0
nr_hugepages=1024
resv_hugepages=0
00:03:26.716 20:30:10 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:26.716 20:30:10 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
surplus_hugepages=0
00:03:26.716 20:30:10 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
anon_hugepages=10240
00:03:26.716 20:30:10 -- setup/hugepages.sh@105 -- # echo anon_hugepages=10240 00:03:26.716 20:30:10 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:26.716 20:30:10 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
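The @107/@109 arithmetic just traced is the actual assertion of this test: the kernel's view of hugepages must add up to the configured count. A sketch with this run's numbers (variable names are ours; the zeros stand in for the surp/resv queries traced above, and total for the HugePages_Total query traced right after this point):

    nr_hugepages=1024   # configured page count, echoed by the trace
    surp=0              # HugePages_Surp, from the query above
    resv=0              # HugePages_Rsvd, from the query above
    total=1024          # HugePages_Total, queried next in the trace
    if (( total == nr_hugepages + surp + resv )); then
        echo "nr_hugepages=$nr_hugepages"
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
    fi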
00:03:26.716 20:30:10 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:26.716 20:30:10 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:26.716 20:30:10 -- setup/common.sh@18 -- # local node= 00:03:26.716 20:30:10 -- setup/common.sh@19 -- # local var val 00:03:26.716 20:30:10 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.716 20:30:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.716 20:30:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.716 20:30:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.716 20:30:10 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.716 20:30:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.716 20:30:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.716 20:30:10 -- setup/common.sh@31 -- # read -r var val _
00:03:26.717 20:30:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 6901580 kB' 'MemAvailable: 9528876 kB' 'Buffers: 2068 kB' 'Cached: 2818884 kB' 'SwapCached: 0 kB' 'Active: 2190016 kB' 'Inactive: 727688 kB' 'Active(anon): 96960 kB' 'Inactive(anon): 16672 kB' 'Active(file): 2093056 kB' 'Inactive(file): 711016 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 28 kB' 'Writeback: 0 kB' 'AnonPages: 95788 kB' 'Mapped: 25336 kB' 'Shmem: 16880 kB' 'Slab: 171332 kB' 'SReclaimable: 121684 kB' 'SUnreclaim: 49648 kB' 'KernelStack: 3872 kB' 'PageTables: 8244 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5101996 kB' 'Committed_AS: 343248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690404 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 10240 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 89964 kB' 'DirectMap2M: 5152768 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: the same field-by-field scan, continuing until HugePages_Total matches]
00:03:26.718 20:30:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.718 20:30:10 -- setup/common.sh@33 -- # echo 1024 00:03:26.718 20:30:10 -- setup/common.sh@33 -- # return 0
00:03:26.718 20:30:10 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:26.718 20:30:10 -- setup/hugepages.sh@112 -- # get_nodes 00:03:26.718 20:30:10 -- setup/hugepages.sh@27 -- # local node 00:03:26.718 20:30:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.718 20:30:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:26.718 20:30:10 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:26.718 20:30:10 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:26.718 20:30:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:26.718 20:30:10 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:26.718 20:30:10 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
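The get_meminfo HugePages_Surp 0 call above is the node-aware variant: the next trace block shows it switching from /proc/meminfo to node0's sysfs file and stripping the "Node <n> " prefix those lines carry. A sketch of that selection logic, as an approximation of the traced @18-@29 steps rather than a copy of setup/common.sh (note that get_nodes above enumerates /sys/devices/system/node/node+([0-9]) with the same extglob pattern):

    shopt -s extglob                    # +([0-9]) below is an extglob pattern
    node=0                              # node id passed to the call
    mem_f=/proc/meminfo                 # default source
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # "Node 0 MemTotal: ..." -> "MemTotal: ..."

After the prefix strip, the per-node file parses with exactly the same field loop as /proc/meminfo, which is why one helper serves both cases.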
00:03:26.718 20:30:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.718 20:30:10 -- setup/common.sh@18 -- # local node=0 00:03:26.718 20:30:10 -- setup/common.sh@19 -- # local var val 00:03:26.718 20:30:10 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.718 20:30:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.718 20:30:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:26.718 20:30:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:26.718 20:30:10 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.718 20:30:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.718 20:30:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.718 20:30:10 -- setup/common.sh@31 -- # read -r var val _
00:03:26.718 20:30:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 6902192 kB' 'MemUsed: 5398956 kB' 'Active: 2189788 kB' 'Inactive: 727692 kB' 'Active(anon): 96724 kB' 'Inactive(anon): 16676 kB' 'Active(file): 2093064 kB' 'Inactive(file): 711016 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 28 kB' 'Writeback: 0 kB' 'FilePages: 2820964 kB' 'Mapped: 25348 kB' 'AnonPages: 95924 kB' 'Shmem: 16884 kB' 'KernelStack: 3872 kB' 'PageTables: 8172 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'Slab: 171416 kB' 'SReclaimable: 121764 kB' 'SUnreclaim: 49652 kB' 'AnonHugePages: 10240 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace elided: the same field-by-field scan over the node0 fields, continuing until HugePages_Surp matches]
00:03:26.719 20:30:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.719 20:30:10 -- setup/common.sh@33 -- # echo 0 00:03:26.719 20:30:10 -- setup/common.sh@33 -- # return 0
00:03:26.719 20:30:10 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:26.719 20:30:10 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:26.719 20:30:10 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:26.719 20:30:10 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
node0=1024 expecting 1024
00:03:26.719 20:30:10 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:26.719 20:30:10 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:26.719
00:03:26.719 real 0m0.498s
00:03:26.719 user 0m0.201s
00:03:26.719 sys 0m0.293s
00:03:26.719 20:30:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:26.719 20:30:10 -- common/autotest_common.sh@10 -- # set +x
00:03:26.719 ************************************
00:03:26.719 END TEST default_setup
00:03:26.719 ************************************
00:03:26.719 20:30:10 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:26.719 20:30:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:26.719 20:30:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:26.719 20:30:10 -- common/autotest_common.sh@10 -- # set +x 00:03:26.719 ************************************ START TEST
per_node_1G_alloc 00:03:26.719 ************************************ 00:03:26.719 20:30:10 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:03:26.719 20:30:10 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:26.719 20:30:10 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:03:26.719 20:30:10 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:26.719 20:30:10 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:26.719 20:30:10 -- setup/hugepages.sh@51 -- # shift 00:03:26.719 20:30:10 -- setup/hugepages.sh@52 -- # node_ids=("$@") 00:03:26.719 20:30:10 -- setup/hugepages.sh@52 -- # local node_ids 00:03:26.719 20:30:10 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:26.719 20:30:10 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:26.719 20:30:10 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:26.719 20:30:10 -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:03:26.719 20:30:10 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:26.719 20:30:10 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:26.719 20:30:10 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:26.719 20:30:10 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:26.719 20:30:10 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:26.719 20:30:10 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:26.719 20:30:10 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:26.719 20:30:10 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:26.719 20:30:10 -- setup/hugepages.sh@73 -- # return 0 00:03:26.719 20:30:10 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:26.719 20:30:10 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:03:26.719 20:30:10 -- setup/hugepages.sh@146 -- # setup output 00:03:26.719 20:30:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:26.719 20:30:10 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:26.980 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:03:26.980 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:26.980 20:30:10 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:03:26.980 20:30:10 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:26.980 20:30:10 -- setup/hugepages.sh@89 -- # local node 00:03:26.980 20:30:10 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:26.980 20:30:10 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:26.980 20:30:10 -- setup/hugepages.sh@92 -- # local surp 00:03:26.980 20:30:10 -- setup/hugepages.sh@93 -- # local resv 00:03:26.980 20:30:10 -- setup/hugepages.sh@94 -- # local anon 00:03:26.980 20:30:10 -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]] 00:03:26.980 20:30:10 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:26.980 20:30:10 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:26.980 20:30:10 -- setup/common.sh@18 -- # local node= 00:03:26.980 20:30:10 -- setup/common.sh@19 -- # local var val 00:03:26.980 20:30:10 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.980 20:30:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.980 20:30:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.980 20:30:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.980 20:30:10 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.980 20:30:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.980 20:30:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.980 20:30:10 -- setup/common.sh@31 
00:03:26.980 20:30:10 -- setup/common.sh@31 -- # read -r var val _
00:03:26.980 20:30:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 7950876 kB' 'MemAvailable: 10578268 kB' 'Buffers: 2068 kB' 'Cached: 2818936 kB' 'SwapCached: 0 kB' 'Active: 2189768 kB' 'Inactive: 727700 kB' 'Active(anon): 96672 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2093096 kB' 'Inactive(file): 711016 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 95936 kB' 'Mapped: 25236 kB' 'Shmem: 16892 kB' 'Slab: 171548 kB' 'SReclaimable: 121796 kB' 'SUnreclaim: 49752 kB' 'KernelStack: 3872 kB' 'PageTables: 8124 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5626284 kB' 'Committed_AS: 343256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690404 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 10240 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 89964 kB' 'DirectMap2M: 5152768 kB' 'DirectMap1G: 9437184 kB'
[... repetitive per-field scan elided: the read loop tests each meminfo field in turn and continues until AnonHugePages matches ...]
00:03:26.981 20:30:10 -- setup/common.sh@33 -- # echo 10240
00:03:26.981 20:30:10 -- setup/common.sh@33 -- # return 0
00:03:26.981 20:30:10 -- setup/hugepages.sh@97 -- # anon=10240
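For context, the get_meminfo helper traced above is just a field lookup over /proc/meminfo, or over a node's meminfo file when a node id is passed. A minimal standalone sketch of that lookup, assuming the stock meminfo layout; the name get_meminfo_field and its interface are ours, not the suite's:

    # Print one field's value from /proc/meminfo, or from a NUMA node's
    # meminfo when a node id is given as the second argument.
    get_meminfo_field() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo
        # Per-node stats live in sysfs; each line there carries a "Node N " prefix.
        [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

Called as get_meminfo_field AnonHugePages it would print 10240 on this box; get_meminfo_field HugePages_Free 0 reads node 0's copy instead.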
00:03:26.981 20:30:10 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:26.981 20:30:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 7951136 kB' 'MemAvailable: 10578528 kB' 'Buffers: 2068 kB' 'Cached: 2818936 kB' 'SwapCached: 0 kB' 'Active: 2189508 kB' 'Inactive: 727700 kB' 'Active(anon): 96412 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2093096 kB' 'Inactive(file): 711016 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 95548 kB' 'Mapped: 25236 kB' 'Shmem: 16892 kB' 'Slab: 171548 kB' 'SReclaimable: 121796 kB' 'SUnreclaim: 49752 kB' 'KernelStack: 3872 kB' 'PageTables: 8512 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5626284 kB' 'Committed_AS: 343256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690404 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 10240 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 89964 kB' 'DirectMap2M: 5152768 kB' 'DirectMap1G: 9437184 kB'
[... variable setup and per-field scan identical in form to the call above elided; the loop stops at HugePages_Surp ...]
00:03:27.245 20:30:10 -- setup/common.sh@33 -- # echo 0
00:03:27.245 20:30:10 -- setup/common.sh@33 -- # return 0
00:03:27.245 20:30:10 -- setup/hugepages.sh@99 -- # surp=0
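HugePages_Surp counts surplus pages the kernel allocated beyond nr_hugepages under overcommit; HugePages_Rsvd, read next, counts pages a mapping has reserved but not yet faulted in. The same counters are also exposed per page size under sysfs, which avoids parsing /proc/meminfo at all. A small sketch, assuming the standard 2 MiB hugepage directory:

    # Dump the kernel's per-size hugepage counters for 2 MiB pages.
    d=/sys/kernel/mm/hugepages/hugepages-2048kB
    for f in nr_hugepages free_hugepages resv_hugepages surplus_hugepages nr_overcommit_hugepages; do
        printf '%-24s %s\n' "$f" "$(cat "$d/$f")"   # one counter per line
    done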
00:03:27.245 20:30:10 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:27.245 20:30:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 7951396 kB' 'MemAvailable: 10578788 kB' 'Buffers: 2068 kB' 'Cached: 2818936 kB' 'SwapCached: 0 kB' 'Active: 2189508 kB' 'Inactive: 727700 kB' 'Active(anon): 96412 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2093096 kB' 'Inactive(file): 711016 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 95936 kB' 'Mapped: 25236 kB' 'Shmem: 16892 kB' 'Slab: 171548 kB' 'SReclaimable: 121796 kB' 'SUnreclaim: 49752 kB' 'KernelStack: 3872 kB' 'PageTables: 8512 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5626284 kB' 'Committed_AS: 343256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690404 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 10240 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 89964 kB' 'DirectMap2M: 5152768 kB' 'DirectMap1G: 9437184 kB'
[... per-field scan elided; the loop stops at HugePages_Rsvd ...]
00:03:27.246 20:30:10 -- setup/common.sh@33 -- # echo 0
00:03:27.246 20:30:10 -- setup/common.sh@33 -- # return 0
00:03:27.246 nr_hugepages=512
00:03:27.246 resv_hugepages=0
00:03:27.246 surplus_hugepages=0
00:03:27.246 anon_hugepages=10240
00:03:27.246 20:30:10 -- setup/hugepages.sh@100 -- # resv=0
00:03:27.246 20:30:10 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:03:27.246 20:30:10 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:27.246 20:30:10 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:27.246 20:30:10 -- setup/hugepages.sh@105 -- # echo anon_hugepages=10240
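The arithmetic that follows ties the numbers just echoed together: the HugePages_Total the kernel reports has to be accounted for by the count the test requested plus any surplus and reserved pages, all zero in this run. A loose standalone rendition of that check, reusing the get_meminfo_field sketch from earlier rather than the suite's own code:

    expected=512   # what the NRHUGE setup step asked for
    total=$(get_meminfo_field HugePages_Total)
    surp=$(get_meminfo_field HugePages_Surp)
    resv=$(get_meminfo_field HugePages_Rsvd)
    # Mirrors the traced expression: every reported page is either part of
    # the requested pool, surplus, or held by an unfaulted reservation.
    (( total == expected + surp + resv )) || echo "hugepage pool inconsistent" >&2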
00:03:27.246 20:30:10 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:03:27.246 20:30:10 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:03:27.246 20:30:10 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:27.246 20:30:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 7951300 kB' 'MemAvailable: 10578692 kB' 'Buffers: 2068 kB' 'Cached: 2818936 kB' 'SwapCached: 0 kB' 'Active: 2189508 kB' 'Inactive: 727700 kB' 'Active(anon): 96412 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2093096 kB' 'Inactive(file): 711016 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 95548 kB' 'Mapped: 25236 kB' 'Shmem: 16892 kB' 'Slab: 171548 kB' 'SReclaimable: 121796 kB' 'SUnreclaim: 49752 kB' 'KernelStack: 3872 kB' 'PageTables: 8512 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5626284 kB' 'Committed_AS: 343256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690404 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 10240 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 89964 kB' 'DirectMap2M: 5152768 kB' 'DirectMap1G: 9437184 kB'
[... per-field scan elided; the loop stops at HugePages_Total ...]
00:03:27.248 20:30:10 -- setup/common.sh@33 -- # echo 512
00:03:27.248 20:30:10 -- setup/common.sh@33 -- # return 0
00:03:27.248 20:30:10 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
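With the global pool verified, get_nodes, traced next, discovers the machine's NUMA nodes from sysfs so the per-node expectations can be checked one by one. A sketch of the same enumeration, assuming the usual sysfs layout and 2 MiB pages:

    # Count hugepages actually allocated on each NUMA node.
    for n in /sys/devices/system/node/node[0-9]*; do
        node=${n##*node}   # strip the path down to the bare node id
        echo "node$node: $(cat "$n/hugepages/hugepages-2048kB/nr_hugepages") hugepages"
    done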
00:03:27.248 20:30:10 -- setup/hugepages.sh@112 -- # get_nodes
00:03:27.248 20:30:10 -- setup/hugepages.sh@27 -- # local node
00:03:27.248 20:30:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:27.248 20:30:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:27.248 20:30:10 -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:27.248 20:30:10 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:27.248 20:30:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:27.248 20:30:10 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:27.248 20:30:10 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:27.248 20:30:10 -- setup/common.sh@18 -- # local node=0
00:03:27.248 20:30:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:27.248 20:30:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:27.248 20:30:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 7951300 kB' 'MemUsed: 4349848 kB' 'Active: 2189508 kB' 'Inactive: 727700 kB' 'Active(anon): 96412 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2093096 kB' 'Inactive(file): 711016 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'FilePages: 2821004 kB' 'Mapped: 25236 kB' 'AnonPages: 95936 kB' 'Shmem: 16892 kB' 'KernelStack: 3872 kB' 'PageTables: 8512 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'Slab: 171548 kB' 'SReclaimable: 121796 kB' 'SUnreclaim: 49752 kB' 'AnonHugePages: 10240 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... per-field scan of node0's meminfo elided; the loop stops at HugePages_Surp ...]
00:03:27.249 20:30:10 -- setup/common.sh@33 -- # echo 0
00:03:27.249 20:30:10 -- setup/common.sh@33 -- # return 0
00:03:27.249 node0=512 expecting 512
00:03:27.249 ************************************
00:03:27.249 END TEST per_node_1G_alloc
00:03:27.249 ************************************
00:03:27.249 20:30:10 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:27.249 20:30:10 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:27.249 20:30:10 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:27.249 20:30:10 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:27.249 20:30:10 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:27.249 20:30:10 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:27.249 real    0m0.311s
00:03:27.249 user    0m0.147s
00:03:27.249 sys     0m0.197s
00:03:27.249 20:30:10 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:27.249 20:30:10 -- common/autotest_common.sh@10 -- # set +x
512' 00:03:27.249 20:30:10 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:27.249 00:03:27.249 real 0m0.311s 00:03:27.249 user 0m0.147s 00:03:27.249 sys 0m0.197s 00:03:27.249 20:30:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:27.249 20:30:10 -- common/autotest_common.sh@10 -- # set +x 00:03:27.249 20:30:10 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:27.249 20:30:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:27.249 20:30:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:27.249 20:30:10 -- common/autotest_common.sh@10 -- # set +x 00:03:27.249 ************************************ 00:03:27.249 START TEST even_2G_alloc 00:03:27.249 ************************************ 00:03:27.249 20:30:10 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:03:27.249 20:30:10 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:27.249 20:30:10 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:27.249 20:30:10 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:27.249 20:30:10 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:27.249 20:30:10 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:27.249 20:30:10 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:27.249 20:30:10 -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:03:27.249 20:30:10 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:27.249 20:30:10 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:27.249 20:30:10 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:27.249 20:30:10 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:27.249 20:30:10 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:27.249 20:30:10 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:27.249 20:30:10 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:27.249 20:30:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:27.249 20:30:10 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:03:27.249 20:30:10 -- setup/hugepages.sh@83 -- # : 0 00:03:27.249 20:30:10 -- setup/hugepages.sh@84 -- # : 0 00:03:27.249 20:30:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:27.249 20:30:10 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:27.249 20:30:10 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:27.249 20:30:10 -- setup/hugepages.sh@153 -- # setup output 00:03:27.249 20:30:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:27.249 20:30:10 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:27.512 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:03:27.513 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:27.513 20:30:10 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:27.513 20:30:10 -- setup/hugepages.sh@89 -- # local node 00:03:27.513 20:30:10 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:27.513 20:30:10 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:27.513 20:30:10 -- setup/hugepages.sh@92 -- # local surp 00:03:27.513 20:30:10 -- setup/hugepages.sh@93 -- # local resv 00:03:27.513 20:30:10 -- setup/hugepages.sh@94 -- # local anon 00:03:27.513 20:30:10 -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]] 00:03:27.513 20:30:10 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:27.513 20:30:10 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:27.513 20:30:10 -- setup/common.sh@18 -- # local node= 00:03:27.513 20:30:10 -- setup/common.sh@19 -- # local 
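The get_test_nr_hugepages trace above reduces to one division: the requested pool size divided by the system hugepage size gives nr_hugepages. A minimal standalone sketch of that arithmetic follows; it is not the SPDK helper itself, and it assumes the request is expressed in kB, which the traced 2097152 -> 1024 result implies (2 GiB over 2048 kB pages).

#!/usr/bin/env bash
# Sketch: reproduce the nr_hugepages computation seen in the trace above.
set -euo pipefail

size_kb=2097152                                           # requested pool size, kB (2 GiB)
hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # hugepage size, typically 2048 kB
echo "nr_hugepages=$(( size_kb / hp_kb ))"                # 2097152 / 2048 = 1024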
00:03:27.513 20:30:10 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:27.513 20:30:10 -- setup/hugepages.sh@89 -- # local node
00:03:27.513 20:30:10 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:27.513 20:30:10 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:27.513 20:30:10 -- setup/hugepages.sh@92 -- # local surp
00:03:27.513 20:30:10 -- setup/hugepages.sh@93 -- # local resv
00:03:27.513 20:30:10 -- setup/hugepages.sh@94 -- # local anon
00:03:27.513 20:30:10 -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]]
00:03:27.513 20:30:10 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:27.513 20:30:10 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:27.513 20:30:10 -- setup/common.sh@18 -- # local node=
00:03:27.513 20:30:10 -- setup/common.sh@19 -- # local var val
00:03:27.513 20:30:10 -- setup/common.sh@20 -- # local mem_f mem
00:03:27.513 20:30:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:27.513 20:30:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:27.513 20:30:10 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:27.513 20:30:10 -- setup/common.sh@28 -- # mapfile -t mem
00:03:27.513 20:30:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:27.513 20:30:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 6901588 kB' 'MemAvailable: 9528980 kB' 'Buffers: 2068 kB' 'Cached: 2818936 kB' 'SwapCached: 0 kB' 'Active: 2190356 kB' 'Inactive: 727700 kB' 'Active(anon): 97260 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2093096 kB' 'Inactive(file): 711016 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 95648 kB' 'Mapped: 25236 kB' 'Shmem: 16892 kB' 'Slab: 171548 kB' 'SReclaimable: 121796 kB' 'SUnreclaim: 49752 kB' 'KernelStack: 3872 kB' 'PageTables: 8124 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5101996 kB' 'Committed_AS: 343256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690404 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 10240 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 89964 kB' 'DirectMap2M: 5152768 kB' 'DirectMap1G: 9437184 kB'
00:03:27.513 [xtrace elided: get_meminfo walks each meminfo field (MemTotal .. HardwareCorrupted) and continues until it reaches AnonHugePages]
00:03:27.514 20:30:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:27.514 20:30:10 -- setup/common.sh@33 -- # echo 10240
00:03:27.514 20:30:10 -- setup/common.sh@33 -- # return 0
00:03:27.514 20:30:10 -- setup/hugepages.sh@97 -- # anon=10240
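The long runs of IFS=': ' / read / continue collapsed above are all one pattern: get_meminfo splits each meminfo line on ': ' and skips fields until the requested one matches, then echoes its value. A condensed re-implementation of that scan, as a standalone sketch rather than the actual setup/common.sh code:

#!/usr/bin/env bash
# Sketch of the field scan visible in the xtrace: read /proc/meminfo line by
# line, split on ': ', and stop at the requested field. The real helper also
# handles per-node meminfo files; this version covers the system-wide case.
get_meminfo() {
	local get=$1 var val _
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue   # skip non-matching fields
		echo "$val"                        # value in kB (or pages for HugePages_*)
		return 0
	done < /proc/meminfo
	return 1
}

get_meminfo AnonHugePages   # e.g. prints 10240 on the host traced above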
00:03:27.514 20:30:10 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:27.514 20:30:10 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:27.514 20:30:10 -- setup/common.sh@18 -- # local node=
00:03:27.514 20:30:10 -- setup/common.sh@19 -- # local var val
00:03:27.514 20:30:10 -- setup/common.sh@20 -- # local mem_f mem
00:03:27.514 20:30:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:27.514 20:30:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:27.514 20:30:10 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:27.514 20:30:10 -- setup/common.sh@28 -- # mapfile -t mem
00:03:27.514 20:30:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:27.514 20:30:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 6902048 kB' 'MemAvailable: 9529440 kB' 'Buffers: 2068 kB' 'Cached: 2818936 kB' 'SwapCached: 0 kB' 'Active: 2190356 kB' 'Inactive: 727700 kB' 'Active(anon): 97260 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2093096 kB' 'Inactive(file): 711016 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 95356 kB' 'Mapped: 25236 kB' 'Shmem: 16892 kB' 'Slab: 171548 kB' 'SReclaimable: 121796 kB' 'SUnreclaim: 49752 kB' 'KernelStack: 3872 kB' 'PageTables: 8124 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5101996 kB' 'Committed_AS: 343256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690404 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 10240 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 89964 kB' 'DirectMap2M: 5152768 kB' 'DirectMap1G: 9437184 kB'
00:03:27.514 [xtrace elided: field-by-field scan (MemTotal .. HugePages_Rsvd) until HugePages_Surp matches]
00:03:27.515 20:30:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.515 20:30:10 -- setup/common.sh@33 -- # echo 0
00:03:27.515 20:30:10 -- setup/common.sh@33 -- # return 0
00:03:27.515 20:30:10 -- setup/hugepages.sh@99 -- # surp=0
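For quick manual cross-checking, the counters the test fetches one get_meminfo call at a time can be pulled in a single pass. This is only a debugging convenience, not how the harness reads them:

# One-shot view of the counters queried individually in the trace above
grep -E '^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)):' /proc/meminfo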
00:03:27.515 20:30:10 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:27.515 20:30:10 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:27.515 20:30:10 -- setup/common.sh@18 -- # local node=
00:03:27.515 20:30:10 -- setup/common.sh@19 -- # local var val
00:03:27.515 20:30:10 -- setup/common.sh@20 -- # local mem_f mem
00:03:27.515 20:30:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:27.515 20:30:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:27.515 20:30:10 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:27.515 20:30:10 -- setup/common.sh@28 -- # mapfile -t mem
00:03:27.515 20:30:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:27.516 20:30:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 6901996 kB' 'MemAvailable: 9529388 kB' 'Buffers: 2068 kB' 'Cached: 2818936 kB' 'SwapCached: 0 kB' 'Active: 2190356 kB' 'Inactive: 727700 kB' 'Active(anon): 97260 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2093096 kB' 'Inactive(file): 711016 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 95648 kB' 'Mapped: 25236 kB' 'Shmem: 16892 kB' 'Slab: 171548 kB' 'SReclaimable: 121796 kB' 'SUnreclaim: 49752 kB' 'KernelStack: 3872 kB' 'PageTables: 8416 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5101996 kB' 'Committed_AS: 343256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690404 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 10240 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 89964 kB' 'DirectMap2M: 5152768 kB' 'DirectMap1G: 9437184 kB'
00:03:27.516 [xtrace elided: field-by-field scan (MemTotal .. HugePages_Free) until HugePages_Rsvd matches]
00:03:27.517 20:30:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:27.517 20:30:10 -- setup/common.sh@33 -- # echo 0
00:03:27.517 20:30:10 -- setup/common.sh@33 -- # return 0
00:03:27.517 nr_hugepages=1024
00:03:27.517 resv_hugepages=0
00:03:27.517 surplus_hugepages=0
00:03:27.517 anon_hugepages=10240
00:03:27.517 20:30:10 -- setup/hugepages.sh@100 -- # resv=0
00:03:27.517 20:30:10 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:27.517 20:30:10 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:27.517 20:30:10 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:27.517 20:30:10 -- setup/hugepages.sh@105 -- # echo anon_hugepages=10240
00:03:27.517 20:30:10 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:27.517 20:30:10 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
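The (( ... )) checks above assert that the pool the test configured is the pool the kernel reports. A simplified standalone check in the same spirit, leaning on the kernel invariant that HugePages_Total equals the persistent pool (vm.nr_hugepages) plus surplus pages; the resv term the script also folds in is 0 in this run:

#!/usr/bin/env bash
# Hedged sketch: verify hugepage accounting the way the trace does, but as
# one self-contained script. Assumes a Linux host with hugepages enabled.
set -euo pipefail

nr=$(cat /proc/sys/vm/nr_hugepages)                         # persistent pool
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)

if (( total == nr + surp )); then
	echo "hugepage accounting consistent: total=$total nr=$nr surp=$surp"
else
	echo "hugepage accounting mismatch: total=$total nr=$nr surp=$surp" >&2
	exit 1
fi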
00:03:27.517 20:30:10 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:27.517 20:30:10 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:27.517 20:30:10 -- setup/common.sh@18 -- # local node=
00:03:27.517 20:30:10 -- setup/common.sh@19 -- # local var val
00:03:27.517 20:30:10 -- setup/common.sh@20 -- # local mem_f mem
00:03:27.517 20:30:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:27.517 20:30:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:27.517 20:30:10 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:27.517 20:30:10 -- setup/common.sh@28 -- # mapfile -t mem
00:03:27.517 20:30:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:27.517 20:30:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 6902148 kB' 'MemAvailable: 9529540 kB' 'Buffers: 2068 kB' 'Cached: 2818936 kB' 'SwapCached: 0 kB' 'Active: 2190356 kB' 'Inactive: 727700 kB' 'Active(anon): 97260 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2093096 kB' 'Inactive(file): 711016 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 95356 kB' 'Mapped: 25236 kB' 'Shmem: 16892 kB' 'Slab: 171548 kB' 'SReclaimable: 121796 kB' 'SUnreclaim: 49752 kB' 'KernelStack: 3872 kB' 'PageTables: 8416 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5101996 kB' 'Committed_AS: 343256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690404 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 10240 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 89964 kB' 'DirectMap2M: 5152768 kB' 'DirectMap1G: 9437184 kB'
00:03:27.517 [xtrace elided: field-by-field scan (MemTotal .. CmaFree) until HugePages_Total matches]
00:03:27.518 20:30:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:27.518 20:30:10 -- setup/common.sh@33 -- # echo 1024
00:03:27.518 20:30:10 -- setup/common.sh@33 -- # return 0
00:03:27.518 20:30:10 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:27.518 20:30:10 -- setup/hugepages.sh@112 -- # get_nodes
00:03:27.518 20:30:10 -- setup/hugepages.sh@27 -- # local node
00:03:27.518 20:30:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:27.518 20:30:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:27.518 20:30:10 -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:27.518 20:30:10 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:27.518 20:30:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:27.518 20:30:10 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:27.519 20:30:10 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:27.519 20:30:10 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:27.519 20:30:10 -- setup/common.sh@18 -- # local node=0
00:03:27.519 20:30:10 -- setup/common.sh@19 -- # local var val
00:03:27.519 20:30:10 -- setup/common.sh@20 -- # local mem_f mem
00:03:27.519 20:30:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:27.519 20:30:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:27.519 20:30:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:27.519 20:30:10 -- setup/common.sh@28 -- # mapfile -t mem
00:03:27.519 20:30:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:27.519 20:30:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 6902112 kB' 'MemUsed: 5399036 kB' 'Active: 2190356 kB' 'Inactive: 727700 kB' 'Active(anon): 97260 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2093096 kB' 'Inactive(file): 711016 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'FilePages: 2821004 kB' 'Mapped: 25236 kB' 'AnonPages: 95648 kB' 'Shmem: 16892 kB' 'KernelStack: 3872 kB' 'PageTables: 8416 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'Slab: 171548 kB' 'SReclaimable: 121796 kB' 'SUnreclaim: 49752 kB' 'AnonHugePages: 10240 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:27.519 [xtrace elided: get_meminfo scans the node0 meminfo fields (MemTotal, MemFree, MemUsed, ...) toward HugePages_Surp; the scan continues below]
Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.519 20:30:10 -- setup/common.sh@32 -- # continue 00:03:27.519 20:30:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.519 20:30:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.519 20:30:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.519 20:30:10 -- setup/common.sh@32 -- # continue 00:03:27.519 20:30:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.519 20:30:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.519 20:30:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.519 20:30:10 -- setup/common.sh@32 -- # continue 00:03:27.519 20:30:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.519 20:30:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.519 20:30:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.519 20:30:10 -- setup/common.sh@32 -- # continue 00:03:27.519 20:30:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.519 20:30:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.519 20:30:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.519 20:30:10 -- setup/common.sh@32 -- # continue 00:03:27.519 20:30:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.519 20:30:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.519 20:30:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.519 20:30:10 -- setup/common.sh@32 -- # continue 00:03:27.519 20:30:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.519 20:30:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.519 20:30:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.519 20:30:10 -- setup/common.sh@32 -- # continue 00:03:27.519 20:30:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.519 20:30:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.519 20:30:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.519 20:30:10 -- setup/common.sh@32 -- # continue 00:03:27.519 20:30:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.519 20:30:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.519 20:30:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.519 20:30:10 -- setup/common.sh@32 -- # continue 00:03:27.519 20:30:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.520 20:30:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.520 20:30:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.520 20:30:10 -- setup/common.sh@32 -- # continue 00:03:27.520 20:30:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.520 20:30:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.520 20:30:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.520 20:30:10 -- setup/common.sh@32 -- # continue 00:03:27.520 20:30:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.520 20:30:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.520 20:30:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.520 20:30:10 -- setup/common.sh@32 -- # continue 00:03:27.520 20:30:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.520 20:30:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.520 20:30:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.520 20:30:10 -- setup/common.sh@33 -- # echo 0 00:03:27.520 20:30:10 -- setup/common.sh@33 -- # return 0 00:03:27.520 
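For reference, the xtrace above is setup/common.sh's get_meminfo helper scanning a meminfo file key by key. A minimal reconstruction from the traced lines follows; it is a sketch, and the real script may differ in details this log does not show (the loop structure and option handling are inferred):

# Minimal sketch of get_meminfo as reconstructed from the xtrace above.
shopt -s extglob   # needed for the +([0-9]) pattern below
get_meminfo() {
	local get=$1 node=$2
	local var val
	local mem_f mem
	mem_f=/proc/meminfo
	# Per-node counters live in sysfs; prefer them when a node is given.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	# Node files prefix each line with "Node <N> "; strip that prefix.
	mem=("${mem[@]#Node +([0-9]) }")
	local line
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue   # the "continue" spam in the trace
		echo "$val"   # e.g. 1024 for HugePages_Total; kB counters otherwise
		return 0
	done
	return 1
}
get_meminfo HugePages_Total      # global /proc/meminfo, as traced above
get_meminfo HugePages_Surp 0     # node0 sysfs meminfo, as traced above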
00:03:27.520 node0=1024 expecting 1024
00:03:27.520 20:30:10 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:27.520 20:30:10 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:27.520 20:30:10 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:27.520 20:30:10 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:27.520 20:30:10 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:27.520 20:30:10 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:27.520 ************************************
00:03:27.520 END TEST even_2G_alloc
00:03:27.520 ************************************
00:03:27.520 
00:03:27.520 real	0m0.321s
00:03:27.520 user	0m0.174s
00:03:27.520 sys	0m0.182s
00:03:27.520 20:30:10 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:27.520 20:30:10 -- common/autotest_common.sh@10 -- # set +x
00:03:27.520 20:30:10 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:27.520 20:30:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:27.520 20:30:10 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:27.520 20:30:10 -- common/autotest_common.sh@10 -- # set +x
00:03:27.520 ************************************
00:03:27.520 START TEST odd_alloc
00:03:27.520 ************************************
00:03:27.520 20:30:10 -- common/autotest_common.sh@1104 -- # odd_alloc
00:03:27.520 20:30:10 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:27.520 20:30:10 -- setup/hugepages.sh@49 -- # local size=2098176
00:03:27.520 20:30:10 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:27.520 20:30:10 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:27.520 20:30:10 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:27.520 20:30:10 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:27.520 20:30:10 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:03:27.520 20:30:10 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:27.520 20:30:10 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:27.520 20:30:10 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:27.520 20:30:10 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:27.520 20:30:10 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:27.520 20:30:10 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:27.520 20:30:10 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:27.520 20:30:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:27.520 20:30:10 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:03:27.520 20:30:10 -- setup/hugepages.sh@83 -- # : 0
00:03:27.520 20:30:10 -- setup/hugepages.sh@84 -- # : 0
00:03:27.520 20:30:10 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:27.520 20:30:10 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:27.520 20:30:10 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:27.520 20:30:10 -- setup/hugepages.sh@160 -- # setup output
00:03:27.520 20:30:10 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:27.520 20:30:10 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:27.782 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev
00:03:27.782 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:27.782 20:30:11 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:27.782 20:30:11 -- setup/hugepages.sh@89 -- # local node
00:03:27.782 20:30:11 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:27.782 20:30:11 -- setup/hugepages.sh@91 -- # local sorted_s
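A quick check of the sizing arithmetic in the odd_alloc trace above: HUGEMEM is given in megabytes, get_test_nr_hugepages receives the size in kB, and with the 2048 kB hugepage size reported later in /proc/meminfo the request lands on the odd page count 1025. The round-up step is an assumption; the log shows only the input (2098176) and the result (nr_hugepages=1025):

# HUGEMEM=2049 (MB) becomes the kB size passed to get_test_nr_hugepages:
echo $((2049 * 1024))                  # 2098176 kB, the argument seen at @159
# With 'Hugepagesize: 2048 kB' that is 1024.5 pages; assuming the helper
# rounds up, it arrives at the odd count verified later in this test:
echo $(((2098176 + 2048 - 1) / 2048))  # 1025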
00:03:27.782 20:30:11 -- setup/hugepages.sh@92 -- # local surp
00:03:27.782 20:30:11 -- setup/hugepages.sh@93 -- # local resv
00:03:27.782 20:30:11 -- setup/hugepages.sh@94 -- # local anon
00:03:27.782 20:30:11 -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]]
00:03:27.782 20:30:11 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:27.782 20:30:11 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:27.782 20:30:11 -- setup/common.sh@18 -- # local node=
00:03:27.782 20:30:11 -- setup/common.sh@19 -- # local var val
00:03:27.783 20:30:11 -- setup/common.sh@20 -- # local mem_f mem
00:03:27.783 20:30:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:27.783 20:30:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:27.783 20:30:11 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:27.783 20:30:11 -- setup/common.sh@28 -- # mapfile -t mem
00:03:27.783 20:30:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:27.783 20:30:11 -- setup/common.sh@31 -- # IFS=': '
00:03:27.783 20:30:11 -- setup/common.sh@31 -- # read -r var val _
00:03:27.783 20:30:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 6899736 kB' 'MemAvailable: 9527128 kB' 'Buffers: 2068 kB' 'Cached: 2818936 kB' 'SwapCached: 0 kB' 'Active: 2189736 kB' 'Inactive: 727700 kB' 'Active(anon): 96640 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2093096 kB' 'Inactive(file): 711016 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 95800 kB' 'Mapped: 25228 kB' 'Shmem: 16892 kB' 'Slab: 171564 kB' 'SReclaimable: 121796 kB' 'SUnreclaim: 49768 kB' 'KernelStack: 3840 kB' 'PageTables: 8112 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5100972 kB' 'Committed_AS: 343256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690404 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 10240 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 89964 kB' 'DirectMap2M: 5152768 kB' 'DirectMap1G: 9437184 kB'
[... per-field xtrace loop elided: setup/common.sh@31-32 skips MemTotal through HardwareCorrupted with "continue" until AnonHugePages matches ...]
00:03:27.784 20:30:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:27.784 20:30:11 -- setup/common.sh@33 -- # echo 10240
00:03:27.784 20:30:11 -- setup/common.sh@33 -- # return 0
00:03:27.784 20:30:11 -- setup/hugepages.sh@97 -- # anon=10240
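The hugepages.sh@96 guard above compares the kernel's transparent-hugepage mode string ("[always] madvise never", where the brackets mark the active mode) against *[never]*, so AnonHugePages is only collected when THP is not disabled. A sketch of that check; the sysfs path is the usual location and is an assumption here, since the trace shows only the already-expanded string:

# Guard sketch: skip anon-hugepage accounting when the THP mode is [never].
# Path assumed; the xtrace only shows the expanded mode string.
thp_mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp_mode != *"[never]"* ]]; then
	anon=$(get_meminfo AnonHugePages)   # 10240 (kB) in this run
fi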
00:03:27.784 20:30:11 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:27.784 20:30:11 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:27.784 20:30:11 -- setup/common.sh@18 -- # local node=
00:03:27.784 20:30:11 -- setup/common.sh@19 -- # local var val
00:03:27.784 20:30:11 -- setup/common.sh@20 -- # local mem_f mem
00:03:27.784 20:30:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:27.784 20:30:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:27.784 20:30:11 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:27.784 20:30:11 -- setup/common.sh@28 -- # mapfile -t mem
00:03:27.784 20:30:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:27.784 20:30:11 -- setup/common.sh@31 -- # IFS=': '
00:03:27.784 20:30:11 -- setup/common.sh@31 -- # read -r var val _
00:03:27.784 20:30:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 6899732 kB' 'MemAvailable: 9527124 kB' 'Buffers: 2068 kB' 'Cached: 2818936 kB' 'SwapCached: 0 kB' 'Active: 2189996 kB' 'Inactive: 727700 kB' 'Active(anon): 96900 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2093096 kB' 'Inactive(file): 711016 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 96188 kB' 'Mapped: 25228 kB' 'Shmem: 16892 kB' 'Slab: 171564 kB' 'SReclaimable: 121796 kB' 'SUnreclaim: 49768 kB' 'KernelStack: 3840 kB' 'PageTables: 8112 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5100972 kB' 'Committed_AS: 343256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690404 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 10240 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 89964 kB' 'DirectMap2M: 5152768 kB' 'DirectMap1G: 9437184 kB'
[... per-field xtrace loop elided: MemTotal through HugePages_Rsvd are skipped with "continue" until HugePages_Surp matches ...]
00:03:27.785 20:30:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.785 20:30:11 -- setup/common.sh@33 -- # echo 0
00:03:27.785 20:30:11 -- setup/common.sh@33 -- # return 0
00:03:27.785 20:30:11 -- setup/hugepages.sh@99 -- # surp=0
00:03:27.785 20:30:11 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:27.785 20:30:11 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:27.785 20:30:11 -- setup/common.sh@18 -- # local node=
00:03:27.785 20:30:11 -- setup/common.sh@19 -- # local var val
00:03:27.785 20:30:11 -- setup/common.sh@20 -- # local mem_f mem
00:03:27.785 20:30:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:27.785 20:30:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:27.785 20:30:11 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:27.785 20:30:11 -- setup/common.sh@28 -- # mapfile -t mem
00:03:27.785 20:30:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:27.785 20:30:11 -- setup/common.sh@31 -- # IFS=': '
00:03:27.785 20:30:11 -- setup/common.sh@31 -- # read -r var val _
00:03:27.785 20:30:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 6899712 kB' 'MemAvailable: 9527104 kB' 'Buffers: 2068 kB' 'Cached: 2818936 kB' 'SwapCached: 0 kB' 'Active: 2189736 kB' 'Inactive: 727700 kB' 'Active(anon): 96640 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2093096 kB' 'Inactive(file): 711016 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 95800 kB' 'Mapped: 25228 kB' 'Shmem: 16892 kB' 'Slab: 171564 kB' 'SReclaimable: 121796 kB' 'SUnreclaim: 49768 kB' 'KernelStack: 3840 kB' 'PageTables: 8112 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5100972 kB' 'Committed_AS: 343256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690404 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 10240 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 89964 kB' 'DirectMap2M: 5152768 kB' 'DirectMap1G: 9437184 kB'
[... per-field xtrace loop elided: MemTotal through HugePages_Free are skipped with "continue" until HugePages_Rsvd matches ...]
00:03:27.786 20:30:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:27.786 20:30:11 -- setup/common.sh@33 -- # echo 0
00:03:27.786 20:30:11 -- setup/common.sh@33 -- # return 0
00:03:27.787 nr_hugepages=1025
00:03:27.787 resv_hugepages=0
00:03:27.787 surplus_hugepages=0
00:03:27.787 anon_hugepages=10240
00:03:27.787 20:30:11 -- setup/hugepages.sh@100 -- # resv=0
00:03:27.787 20:30:11 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:27.787 20:30:11 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:27.787 20:30:11 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:27.787 20:30:11 -- setup/hugepages.sh@105 -- # echo anon_hugepages=10240
00:03:27.787 20:30:11 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:27.787 20:30:11 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:03:27.787 20:30:11 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:27.787 20:30:11 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:27.787 20:30:11 -- setup/common.sh@18 -- # local node=
00:03:27.787 20:30:11 -- setup/common.sh@19 -- # local var val
00:03:27.787 20:30:11 -- setup/common.sh@20 -- # local mem_f mem
00:03:27.787 20:30:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:27.787 20:30:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:27.787 20:30:11 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:27.787 20:30:11 -- setup/common.sh@28 -- # mapfile -t mem
00:03:27.787 20:30:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:27.787 20:30:11 -- setup/common.sh@31 -- # IFS=': '
00:03:27.787 20:30:11 -- setup/common.sh@31 -- # read -r var val _
00:03:27.787 20:30:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 6899972 kB' 'MemAvailable: 9527364 kB' 'Buffers: 2068 kB' 'Cached: 2818936 kB' 'SwapCached: 0 kB' 'Active: 2189996 kB' 'Inactive: 727700 kB' 'Active(anon): 96900 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2093096 kB' 'Inactive(file): 711016 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 95800 kB' 'Mapped: 25228 kB' 'Shmem: 16892 kB' 'Slab: 171564 kB' 'SReclaimable: 121796 kB' 'SUnreclaim: 49768 kB' 'KernelStack: 3840 kB' 'PageTables: 8112 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5100972 kB' 'Committed_AS: 343256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690404 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 10240 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 89964 kB' 'DirectMap2M: 5152768 kB' 'DirectMap1G: 9437184 kB'
[... per-field xtrace loop elided (MemTotal onward); the captured log breaks off during this scan ...]
IFS=': ' 00:03:27.788 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.788 20:30:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.788 20:30:11 -- setup/common.sh@32 -- # continue 00:03:27.788 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.788 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.788 20:30:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.788 20:30:11 -- setup/common.sh@32 -- # continue 00:03:27.788 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.788 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.788 20:30:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.788 20:30:11 -- setup/common.sh@33 -- # echo 1025 00:03:27.788 20:30:11 -- setup/common.sh@33 -- # return 0 00:03:27.788 20:30:11 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:27.788 20:30:11 -- setup/hugepages.sh@112 -- # get_nodes 00:03:27.788 20:30:11 -- setup/hugepages.sh@27 -- # local node 00:03:27.788 20:30:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.788 20:30:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:03:27.788 20:30:11 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:27.788 20:30:11 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:27.788 20:30:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:27.788 20:30:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:27.788 20:30:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:27.788 20:30:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.788 20:30:11 -- setup/common.sh@18 -- # local node=0 00:03:27.788 20:30:11 -- setup/common.sh@19 -- # local var val 00:03:27.788 20:30:11 -- setup/common.sh@20 -- # local mem_f mem 00:03:27.788 20:30:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.788 20:30:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:27.788 20:30:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:27.788 20:30:11 -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.788 20:30:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.788 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.788 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.049 20:30:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 6899584 kB' 'MemUsed: 5401564 kB' 'Active: 2189996 kB' 'Inactive: 727700 kB' 'Active(anon): 96900 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2093096 kB' 'Inactive(file): 711016 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'FilePages: 2821004 kB' 'Mapped: 25228 kB' 'AnonPages: 95800 kB' 'Shmem: 16892 kB' 'KernelStack: 3840 kB' 'PageTables: 8112 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'Slab: 171564 kB' 'SReclaimable: 121796 kB' 'SUnreclaim: 49768 kB' 'AnonHugePages: 10240 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.049 20:30:11 -- 
setup/common.sh@32 -- # continue 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.049 20:30:11 -- 
setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # IFS=': 
' 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.049 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.049 20:30:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.049 20:30:11 -- setup/common.sh@33 -- # echo 0 00:03:28.049 20:30:11 -- setup/common.sh@33 -- # return 0 00:03:28.049 node0=1025 expecting 1025 00:03:28.049 20:30:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:28.049 20:30:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:28.049 20:30:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:28.049 20:30:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:28.049 20:30:11 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:03:28.049 20:30:11 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:03:28.049 00:03:28.049 real 0m0.328s 00:03:28.049 user 0m0.167s 00:03:28.049 sys 0m0.196s 00:03:28.049 20:30:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:28.049 20:30:11 -- common/autotest_common.sh@10 -- # set +x 00:03:28.049 ************************************ 00:03:28.050 END TEST odd_alloc 00:03:28.050 ************************************ 00:03:28.050 20:30:11 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:28.050 20:30:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:28.050 20:30:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:28.050 20:30:11 -- common/autotest_common.sh@10 -- # set +x 00:03:28.050 ************************************ 00:03:28.050 START TEST custom_alloc 00:03:28.050 ************************************ 00:03:28.050 20:30:11 -- common/autotest_common.sh@1104 -- # custom_alloc 00:03:28.050 20:30:11 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:28.050 20:30:11 -- setup/hugepages.sh@169 -- # local node 00:03:28.050 20:30:11 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:28.050 20:30:11 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:28.050 20:30:11 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:28.050 20:30:11 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:28.050 20:30:11 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:28.050 20:30:11 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:28.050 20:30:11 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:28.050 20:30:11 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:28.050 20:30:11 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:28.050 20:30:11 -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:03:28.050 20:30:11 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:28.050 20:30:11 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:28.050 20:30:11 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:28.050 20:30:11 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:28.050 20:30:11 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:28.050 20:30:11 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:28.050 20:30:11 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:28.050 20:30:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:28.050 20:30:11 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:28.050 20:30:11 -- setup/hugepages.sh@83 -- # : 0 
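
Editor's note: the long repetitive xtrace runs above -- hundreds of "[[ <field> == \H\u\g\e\P\a\g\e\s\_... ]]" / "continue" records -- are all one helper at work: setup/common.sh's get_meminfo scans a meminfo file one "key: value" line at a time until the requested field matches, then echoes the value; the backslash-escaped patterns are just how bash xtrace prints the quoted right-hand side of [[ ]]. A condensed sketch of that pattern, reconstructed from the trace rather than copied from the SPDK source:

    #!/usr/bin/env bash
    # Reconstruction of the get_meminfo pattern traced above: return the value
    # of one field from /proc/meminfo, or from a node's copy under /sys, whose
    # lines carry a "Node <n> " prefix that has to be stripped first.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-} line var val _
        local mem_f=/proc/meminfo
        local -a mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the per-node "Node <n> " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"   # "HugePages_Total: 1025" -> var,val
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

In the run above, get_meminfo HugePages_Total answered 1025 system-wide, while the HugePages_Surp lookup passed node=0 and therefore read /sys/devices/system/node/node0/meminfo instead -- the branch visible in the trace where mem_f is reassigned at setup/common.sh@24.
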
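Editor's note: the get_nodes step traced above discovers NUMA nodes by globbing sysfs. A minimal standalone sketch, with the bookkeeping reduced to a count (the real helper also records each node's hugepage total in nodes_sys):

    #!/usr/bin/env bash
    # Enumerate NUMA nodes the way the get_nodes trace above does: an extglob
    # over sysfs, with the numeric directory suffix used as the node index.
    shopt -s extglob nullglob
    nodes=(/sys/devices/system/node/node+([0-9]))
    for n in "${nodes[@]}"; do
        echo "found node ${n##*node}"
    done
    echo "no_nodes=${#nodes[@]}"   # 1 on the single-node VM in this run
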
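Editor's note: custom_alloc opens, just above, by asking get_test_nr_hugepages for a 1048576 kB pool; with the 2048 kB Hugepagesize this machine reports, that yields the nr_hugepages=512 the trace prints. The arithmetic, spelled out under that assumption:

    # 1 GiB requested, expressed in kB, divided by a 2048 kB hugepage:
    size_kb=1048576
    hugepage_kb=2048
    echo "nr_hugepages=$(( size_kb / hugepage_kb ))"   # -> nr_hugepages=512

With a single node, get_test_nr_hugepages_per_node then assigns the whole count to node 0 -- the "nodes_test[_no_nodes - 1]=512" record above -- and the HUGENODE loop later serializes it as nodes_hp[0]=512.
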
00:03:28.050 20:30:11 -- setup/hugepages.sh@84 -- # : 0 00:03:28.050 20:30:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:28.050 20:30:11 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:28.050 20:30:11 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:03:28.050 20:30:11 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:28.050 20:30:11 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:28.050 20:30:11 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:28.050 20:30:11 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:28.050 20:30:11 -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:03:28.050 20:30:11 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:28.050 20:30:11 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:28.050 20:30:11 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:28.050 20:30:11 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:28.050 20:30:11 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:28.050 20:30:11 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:28.050 20:30:11 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:28.050 20:30:11 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:28.050 20:30:11 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:28.050 20:30:11 -- setup/hugepages.sh@78 -- # return 0 00:03:28.050 20:30:11 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:03:28.050 20:30:11 -- setup/hugepages.sh@187 -- # setup output 00:03:28.050 20:30:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.050 20:30:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:28.050 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:03:28.313 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:28.313 20:30:11 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:03:28.313 20:30:11 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:28.313 20:30:11 -- setup/hugepages.sh@89 -- # local node 00:03:28.313 20:30:11 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:28.313 20:30:11 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:28.313 20:30:11 -- setup/hugepages.sh@92 -- # local surp 00:03:28.313 20:30:11 -- setup/hugepages.sh@93 -- # local resv 00:03:28.313 20:30:11 -- setup/hugepages.sh@94 -- # local anon 00:03:28.313 20:30:11 -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]] 00:03:28.313 20:30:11 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:28.313 20:30:11 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:28.313 20:30:11 -- setup/common.sh@18 -- # local node= 00:03:28.313 20:30:11 -- setup/common.sh@19 -- # local var val 00:03:28.313 20:30:11 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.313 20:30:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.313 20:30:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.313 20:30:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.313 20:30:11 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.313 20:30:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.314 20:30:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 7951056 kB' 'MemAvailable: 10578448 kB' 'Buffers: 2068 kB' 'Cached: 2818936 kB' 'SwapCached: 0 kB' 
'Active: 2190388 kB' 'Inactive: 727700 kB' 'Active(anon): 97292 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2093096 kB' 'Inactive(file): 711016 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 96868 kB' 'Mapped: 25228 kB' 'Shmem: 16892 kB' 'Slab: 171564 kB' 'SReclaimable: 121796 kB' 'SUnreclaim: 49768 kB' 'KernelStack: 3840 kB' 'PageTables: 8208 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5626284 kB' 'Committed_AS: 343256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690404 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 10240 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 89964 kB' 'DirectMap2M: 5152768 kB' 'DirectMap1G: 9437184 kB' 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # 
continue 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # [[ SReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.314 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.314 20:30:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.315 20:30:11 -- setup/common.sh@33 -- # echo 10240 00:03:28.315 20:30:11 -- setup/common.sh@33 -- # return 0 00:03:28.315 20:30:11 -- setup/hugepages.sh@97 -- # anon=10240 00:03:28.315 20:30:11 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:28.315 20:30:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.315 20:30:11 -- setup/common.sh@18 -- # local node= 00:03:28.315 20:30:11 -- setup/common.sh@19 -- # local var val 00:03:28.315 20:30:11 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.315 20:30:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.315 20:30:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.315 20:30:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.315 20:30:11 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.315 20:30:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.315 20:30:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 7951056 kB' 'MemAvailable: 10578448 kB' 'Buffers: 2068 kB' 'Cached: 2818936 kB' 'SwapCached: 0 kB' 'Active: 2190388 kB' 'Inactive: 727700 kB' 'Active(anon): 97292 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2093096 kB' 'Inactive(file): 711016 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 96868 kB' 'Mapped: 25228 kB' 'Shmem: 16892 kB' 'Slab: 171564 kB' 'SReclaimable: 121796 kB' 'SUnreclaim: 49768 kB' 'KernelStack: 3840 kB' 'PageTables: 8208 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5626284 kB' 'Committed_AS: 343256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690404 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 10240 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 89964 kB' 'DirectMap2M: 5152768 kB' 'DirectMap1G: 9437184 kB' 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.315 20:30:11 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.315 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.315 20:30:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.316 20:30:11 -- setup/common.sh@32 
-- # continue 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.316 20:30:11 -- setup/common.sh@33 -- # echo 0 00:03:28.316 20:30:11 -- setup/common.sh@33 -- # return 0 00:03:28.316 20:30:11 -- setup/hugepages.sh@99 -- # surp=0 00:03:28.316 20:30:11 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:28.316 20:30:11 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:28.316 20:30:11 -- setup/common.sh@18 -- # local node= 00:03:28.316 20:30:11 -- setup/common.sh@19 -- # local var val 00:03:28.316 20:30:11 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.316 20:30:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.316 20:30:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.316 20:30:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.316 20:30:11 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.316 20:30:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.316 20:30:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 7951316 kB' 'MemAvailable: 10578708 kB' 'Buffers: 2068 kB' 'Cached: 2818936 kB' 'SwapCached: 0 kB' 'Active: 2190388 kB' 'Inactive: 727700 kB' 'Active(anon): 97292 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2093096 kB' 'Inactive(file): 711016 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 96480 kB' 'Mapped: 25228 kB' 'Shmem: 16892 kB' 'Slab: 171564 kB' 'SReclaimable: 121796 kB' 'SUnreclaim: 49768 kB' 'KernelStack: 3840 kB' 'PageTables: 8208 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5626284 kB' 'Committed_AS: 343256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690404 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 10240 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 89964 kB' 'DirectMap2M: 5152768 kB' 'DirectMap1G: 9437184 kB' 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.316 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.316 20:30:11 -- setup/common.sh@32 -- # continue [xtrace trimmed: get_meminfo compares each remaining /proc/meminfo field, SwapCached through HugePages_Free, against HugePages_Rsvd and issues "continue" for every non-match] 00:03:28.317 20:30:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.317 20:30:11 -- setup/common.sh@33 -- # echo 0 00:03:28.317 20:30:11 -- setup/common.sh@33 -- # return 0 00:03:28.317 nr_hugepages=512 resv_hugepages=0 surplus_hugepages=0
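The scan traced above is setup/common.sh's get_meminfo resolving one field: it snapshots /proc/meminfo (or a per-node meminfo under sysfs), splits each line on ': ', and keeps issuing "continue" until the requested key matches, then echoes the value and returns. A minimal bash sketch of that loop, reconstructed from the xtrace rather than copied from the shipped script (the function name and exact expansions here are assumptions):

shopt -s extglob                       # required for the +([0-9]) pattern below

# Approximation of the traced lookup; not the verbatim SPDK source.
get_meminfo_sketch() {
    local get=$1 node=${2:-}           # e.g. get=HugePages_Rsvd, node=0 for per-node stats
    local mem_f=/proc/meminfo var val _
    # When a node index is given and sysfs exposes it, read the per-node file instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the long runs of "continue" in this log
        echo "$val"
        return 0
    done
    return 1
}

Against the snapshot printed earlier, get_meminfo_sketch HugePages_Rsvd prints 0, which is exactly the "echo 0 / return 0" pair the trace ends on.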
00:03:28.317 anon_hugepages=10240 00:03:28.317 20:30:11 -- setup/hugepages.sh@100 -- # resv=0 00:03:28.317 20:30:11 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:28.317 20:30:11 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:28.317 20:30:11 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:28.317 20:30:11 -- setup/hugepages.sh@105 -- # echo anon_hugepages=10240 00:03:28.317 20:30:11 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:28.317 20:30:11 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:28.317 20:30:11 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:28.317 20:30:11 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:28.317 20:30:11 -- setup/common.sh@18 -- # local node= 00:03:28.317 20:30:11 -- setup/common.sh@19 -- # local var val 00:03:28.317 20:30:11 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.317 20:30:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.317 20:30:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.317 20:30:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.317 20:30:11 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.318 20:30:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.318 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.318 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.318 20:30:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 7951580 kB' 'MemAvailable: 10578972 kB' 'Buffers: 2068 kB' 'Cached: 2818936 kB' 'SwapCached: 0 kB' 'Active: 2190388 kB' 'Inactive: 727700 kB' 'Active(anon): 97292 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2093096 kB' 'Inactive(file): 711016 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 96868 kB' 'Mapped: 25228 kB' 'Shmem: 16892 kB' 'Slab: 171564 kB' 'SReclaimable: 121796 kB' 'SUnreclaim: 49768 kB' 'KernelStack: 3840 kB' 'PageTables: 7820 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5626284 kB' 'Committed_AS: 343256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690404 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 10240 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 89964 kB' 'DirectMap2M: 5152768 kB' 'DirectMap1G: 9437184 kB' 00:03:28.318 20:30:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.318 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.318 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.318 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.318 20:30:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.318 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.318 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.318 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.318 20:30:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.318 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.318 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.318 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.318 20:30:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.318 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.318 20:30:11 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:28.318 20:30:11 -- setup/common.sh@31 -- # read -r var val _ [xtrace trimmed: Cached through CmaFree each compared against HugePages_Total, "continue" on every non-match] 00:03:28.319 20:30:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.319 20:30:11 -- setup/common.sh@33 -- # echo 512 00:03:28.319 20:30:11 -- setup/common.sh@33 -- # return 0 00:03:28.319 20:30:11 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:28.319 20:30:11 -- setup/hugepages.sh@112 -- # get_nodes 00:03:28.319 20:30:11 -- setup/hugepages.sh@27 -- # local node 00:03:28.319 20:30:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:28.319 20:30:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:28.319 20:30:11 -- setup/hugepages.sh@32
-- # no_nodes=1 00:03:28.319 20:30:11 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:28.319 20:30:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:28.319 20:30:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:28.319 20:30:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:28.319 20:30:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.319 20:30:11 -- setup/common.sh@18 -- # local node=0 00:03:28.319 20:30:11 -- setup/common.sh@19 -- # local var val 00:03:28.319 20:30:11 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.319 20:30:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.319 20:30:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:28.319 20:30:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:28.319 20:30:11 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.319 20:30:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.319 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.319 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.319 20:30:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 7950968 kB' 'MemUsed: 4350180 kB' 'Active: 2190388 kB' 'Inactive: 727700 kB' 'Active(anon): 97292 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2093096 kB' 'Inactive(file): 711016 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'FilePages: 2821004 kB' 'Mapped: 25228 kB' 'AnonPages: 96480 kB' 'Shmem: 16892 kB' 'KernelStack: 3840 kB' 'PageTables: 7820 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'Slab: 171564 kB' 'SReclaimable: 121796 kB' 'SUnreclaim: 49768 kB' 'AnonHugePages: 10240 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:28.319 20:30:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.319 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.319 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.319 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.319 20:30:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.319 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.319 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.319 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.319 20:30:11 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.319 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.319 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.319 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.319 20:30:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.319 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.319 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.319 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.319 20:30:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.319 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.319 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.319 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.319 20:30:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.319 20:30:11 -- setup/common.sh@32 -- # continue 00:03:28.319 20:30:11 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.319 20:30:11 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.319 20:30:11 -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.319 20:30:11 -- setup/common.sh@32 -- # continue [xtrace trimmed: Active(file) through HugePages_Free each compared against HugePages_Surp, "continue" on every non-match] 00:03:28.320 20:30:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.320 20:30:11 -- setup/common.sh@33 -- # echo 0 00:03:28.320 20:30:11 -- setup/common.sh@33 -- # return 0 00:03:28.320 20:30:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:28.320 20:30:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:28.320 20:30:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 node0=512 expecting 512 ************************************ END TEST custom_alloc ************************************ 20:30:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 20:30:11 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 20:30:11 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:28.320 real 0m0.325s user 0m0.150s sys 0m0.208s
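custom_alloc's closing check, visible in the tail above, folds any reserved/surplus pages into each node's observed count and asserts that it equals the expected split ("node0=512 expecting 512"). A compact reconstruction of that bookkeeping, with variable names taken from the xtrace but control flow inferred (the wrapper name and the literal counts are assumptions, not the shipped hugepages.sh):

# Reconstruction of the per-node assertion traced above; an approximation only.
check_node_split() {
    local node resv=${1:-0} surp=${2:-0}
    # nodes_test holds observed per-node counts, nodes_sys the expected ones.
    local -a nodes_test=([0]=512) nodes_sys=([0]=512) sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))   # reserved pages still count toward the node
        (( nodes_test[node] += surp ))   # ...as do surplus pages
    done
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1     # dedupe observed/expected counts by value
        sorted_s[nodes_sys[node]]=1
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
        [[ ${nodes_test[node]} == "${nodes_sys[node]}" ]] || return 1
    done
}

Running check_node_split 0 0 reproduces the "node0=512 expecting 512" line and exits 0, mirroring the final [[ 512 == \5\1\2 ]] test above.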
00:03:28.320 20:30:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:28.320 20:30:11 -- common/autotest_common.sh@10 -- # set +x 00:03:28.320 20:30:11 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:28.320 20:30:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:28.320 20:30:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:28.320 20:30:11 -- common/autotest_common.sh@10 -- # set +x 00:03:28.320 ************************************ 00:03:28.320 START TEST no_shrink_alloc 00:03:28.320 ************************************ 00:03:28.320 20:30:11 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:03:28.320 20:30:11 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:28.320 20:30:11 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:28.320 20:30:11 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:28.320 20:30:11 -- setup/hugepages.sh@51 -- # shift 00:03:28.320 20:30:11 -- setup/hugepages.sh@52 -- # node_ids=("$@") 00:03:28.320 20:30:11 -- setup/hugepages.sh@52 -- # local node_ids 00:03:28.320 20:30:11 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:28.320 20:30:11 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:28.320 20:30:11 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:28.320 20:30:11 -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:03:28.320 20:30:11 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:28.320 20:30:11 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:28.320 20:30:11 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:28.320 20:30:11 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:28.320 20:30:11 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:28.320 20:30:11 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:28.320 20:30:11 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:28.320 20:30:11 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:28.320 20:30:11 -- setup/hugepages.sh@73 -- # return 0 00:03:28.320 20:30:11 -- setup/hugepages.sh@198 -- # setup output 00:03:28.320 20:30:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.320 20:30:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:28.583 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:03:28.583 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:28.583 20:30:12 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:28.583 20:30:12 -- setup/hugepages.sh@89 -- # local node 00:03:28.583 20:30:12 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:28.583 20:30:12 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:28.583 20:30:12 -- setup/hugepages.sh@92 -- # local surp 00:03:28.583 20:30:12 -- setup/hugepages.sh@93 -- # local resv 00:03:28.583 20:30:12 -- setup/hugepages.sh@94 -- # local anon 00:03:28.583 20:30:12 -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]] 00:03:28.583 20:30:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:28.583 20:30:12 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:28.583 20:30:12 -- setup/common.sh@18 -- # local node= 00:03:28.583 20:30:12 -- setup/common.sh@19 -- # local var val 00:03:28.583 20:30:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.583 20:30:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.583 20:30:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.583 20:30:12 -- setup/common.sh@25 
-- # [[ -n '' ]] 00:03:28.583 20:30:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.583 20:30:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.583 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.583 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.583 20:30:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 6901792 kB' 'MemAvailable: 9529184 kB' 'Buffers: 2068 kB' 'Cached: 2818936 kB' 'SwapCached: 0 kB' 'Active: 2190456 kB' 'Inactive: 727700 kB' 'Active(anon): 97360 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2093096 kB' 'Inactive(file): 711016 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 96772 kB' 'Mapped: 25616 kB' 'Shmem: 16892 kB' 'Slab: 171564 kB' 'SReclaimable: 121796 kB' 'SUnreclaim: 49768 kB' 'KernelStack: 3840 kB' 'PageTables: 8888 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5101996 kB' 'Committed_AS: 343256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690404 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 10240 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 89964 kB' 'DirectMap2M: 5152768 kB' 'DirectMap1G: 9437184 kB' 00:03:28.583 20:30:12 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.583 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.583 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.583 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.584 20:30:12 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.584 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.584 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.584 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.584 20:30:12 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.584 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.584 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.584 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.584 20:30:12 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.584 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.584 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.584 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.584 20:30:12 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.584 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.584 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.584 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.584 20:30:12 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.584 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.584 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.584 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.584 20:30:12 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.584 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.584 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.584 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.584 20:30:12 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.584 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.584 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.584 
20:30:12 -- setup/common.sh@31 -- # read -r var val _ [xtrace trimmed: Active(anon) through VmallocUsed each compared against AnonHugePages, "continue" on every non-match] 00:03:28.584 20:30:12 -- setup/common.sh@32 -- # [[ VmallocChunk ==
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.584 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.585 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.585 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.585 20:30:12 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.585 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.585 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.585 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.585 20:30:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.585 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.585 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.585 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.585 20:30:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.585 20:30:12 -- setup/common.sh@33 -- # echo 10240 00:03:28.585 20:30:12 -- setup/common.sh@33 -- # return 0 00:03:28.585 20:30:12 -- setup/hugepages.sh@97 -- # anon=10240 00:03:28.585 20:30:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:28.585 20:30:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.585 20:30:12 -- setup/common.sh@18 -- # local node= 00:03:28.585 20:30:12 -- setup/common.sh@19 -- # local var val 00:03:28.585 20:30:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.585 20:30:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.585 20:30:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.585 20:30:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.585 20:30:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.585 20:30:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.585 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.585 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.585 20:30:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 6901676 kB' 'MemAvailable: 9529068 kB' 'Buffers: 2068 kB' 'Cached: 2818936 kB' 'SwapCached: 0 kB' 'Active: 2190456 kB' 'Inactive: 727700 kB' 'Active(anon): 97360 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2093096 kB' 'Inactive(file): 711016 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 96772 kB' 'Mapped: 25616 kB' 'Shmem: 16892 kB' 'Slab: 171564 kB' 'SReclaimable: 121796 kB' 'SUnreclaim: 49768 kB' 'KernelStack: 3840 kB' 'PageTables: 8888 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5101996 kB' 'Committed_AS: 343256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690404 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 10240 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 89964 kB' 'DirectMap2M: 5152768 kB' 'DirectMap1G: 9437184 kB' 00:03:28.585 20:30:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.585 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.585 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.585 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.585 20:30:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.585 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.585 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.585 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 
00:03:28.585 20:30:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.585 20:30:12 -- setup/common.sh@32 -- # continue [xtrace trimmed: Buffers through CmaFree each compared against HugePages_Surp, "continue" on every non-match] 00:03:28.586 20:30:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.586 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.586 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.586 20:30:12 -- setup/common.sh@31 -- #
read -r var val _ 00:03:28.586 20:30:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.586 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.586 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.586 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.586 20:30:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.586 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.586 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.586 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.586 20:30:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.586 20:30:12 -- setup/common.sh@33 -- # echo 0 00:03:28.586 20:30:12 -- setup/common.sh@33 -- # return 0 00:03:28.586 20:30:12 -- setup/hugepages.sh@99 -- # surp=0 00:03:28.586 20:30:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:28.586 20:30:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:28.586 20:30:12 -- setup/common.sh@18 -- # local node= 00:03:28.586 20:30:12 -- setup/common.sh@19 -- # local var val 00:03:28.586 20:30:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.586 20:30:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.587 20:30:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.587 20:30:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.587 20:30:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.587 20:30:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.587 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.587 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.587 20:30:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 6901872 kB' 'MemAvailable: 9529264 kB' 'Buffers: 2068 kB' 'Cached: 2818936 kB' 'SwapCached: 0 kB' 'Active: 2190456 kB' 'Inactive: 727700 kB' 'Active(anon): 97360 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2093096 kB' 'Inactive(file): 711016 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 96772 kB' 'Mapped: 25616 kB' 'Shmem: 16892 kB' 'Slab: 171564 kB' 'SReclaimable: 121796 kB' 'SUnreclaim: 49768 kB' 'KernelStack: 3840 kB' 'PageTables: 8888 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5101996 kB' 'Committed_AS: 343256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690404 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 10240 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 89964 kB' 'DirectMap2M: 5152768 kB' 'DirectMap1G: 9437184 kB' 00:03:28.587 20:30:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.587 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.587 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.587 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.587 20:30:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.587 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.587 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.587 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.587 20:30:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.587 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.587 20:30:12 -- 
00:03:28.586 20:30:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:28.586 20:30:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:28.586 20:30:12 -- setup/common.sh@18 -- # local node=
00:03:28.586 20:30:12 -- setup/common.sh@19 -- # local var val
00:03:28.586 20:30:12 -- setup/common.sh@20 -- # local mem_f mem
00:03:28.586 20:30:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.587 20:30:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:28.587 20:30:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:28.587 20:30:12 -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.587 20:30:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.587 20:30:12 -- setup/common.sh@31 -- # IFS=': '
00:03:28.587 20:30:12 -- setup/common.sh@31 -- # read -r var val _
00:03:28.587 20:30:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 6901872 kB' 'MemAvailable: 9529264 kB' 'Buffers: 2068 kB' 'Cached: 2818936 kB' 'SwapCached: 0 kB' 'Active: 2190456 kB' 'Inactive: 727700 kB' 'Active(anon): 97360 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2093096 kB' 'Inactive(file): 711016 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 96772 kB' 'Mapped: 25616 kB' 'Shmem: 16892 kB' 'Slab: 171564 kB' 'SReclaimable: 121796 kB' 'SUnreclaim: 49768 kB' 'KernelStack: 3840 kB' 'PageTables: 8888 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5101996 kB' 'Committed_AS: 343256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690404 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 10240 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 89964 kB' 'DirectMap2M: 5152768 kB' 'DirectMap1G: 9437184 kB'
00:03:28.589 20:30:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:28.589 20:30:12 -- setup/common.sh@33 -- # echo 0
00:03:28.589 20:30:12 -- setup/common.sh@33 -- # return 0
00:03:28.589 20:30:12 -- setup/hugepages.sh@100 -- # resv=0
00:03:28.589 20:30:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:28.589 nr_hugepages=1024
00:03:28.589 resv_hugepages=0
00:03:28.589 surplus_hugepages=0
00:03:28.589 anon_hugepages=10240
00:03:28.589 20:30:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:28.589 20:30:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:28.589 20:30:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=10240
00:03:28.589 20:30:12 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:28.589 20:30:12 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
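What hugepages.sh is asserting at @107: the kernel's hugepage pool must add up, i.e. HugePages_Total equals the requested nr_hugepages plus any surplus and reserved pages (1024 == 1024 + 0 + 0 in this run). A hedged sketch of that bookkeeping, with the variable names taken from the trace and the surrounding control flow assumed:

    # Hugepage accounting check, as around setup/hugepages.sh@99-109 (sketch).
    nr_hugepages=1024                     # what the test requested
    surp=$(get_meminfo HugePages_Surp)    # pages allocated beyond the static pool
    resv=$(get_meminfo HugePages_Rsvd)    # pages committed to mappings but not yet faulted
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    if (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )); then
        echo "hugepage pool is consistent"   # illustrative message, not from the script
    fi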
00:03:28.589 20:30:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:28.589 20:30:12 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:28.589 20:30:12 -- setup/common.sh@18 -- # local node=
00:03:28.589 20:30:12 -- setup/common.sh@19 -- # local var val
00:03:28.589 20:30:12 -- setup/common.sh@20 -- # local mem_f mem
00:03:28.589 20:30:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.589 20:30:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:28.589 20:30:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:28.589 20:30:12 -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.589 20:30:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.589 20:30:12 -- setup/common.sh@31 -- # IFS=': '
00:03:28.589 20:30:12 -- setup/common.sh@31 -- # read -r var val _
00:03:28.589 20:30:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 6901820 kB' 'MemAvailable: 9529212 kB' 'Buffers: 2068 kB' 'Cached: 2818936 kB' 'SwapCached: 0 kB' 'Active: 2190456 kB' 'Inactive: 727700 kB' 'Active(anon): 97360 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2093096 kB' 'Inactive(file): 711016 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'AnonPages: 96480 kB' 'Mapped: 25616 kB' 'Shmem: 16892 kB' 'Slab: 171564 kB' 'SReclaimable: 121796 kB' 'SUnreclaim: 49768 kB' 'KernelStack: 3840 kB' 'PageTables: 9180 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5101996 kB' 'Committed_AS: 343256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690404 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 10240 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 89964 kB' 'DirectMap2M: 5152768 kB' 'DirectMap1G: 9437184 kB'
00:03:28.591 20:30:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:28.591 20:30:12 -- setup/common.sh@33 -- # echo 1024
00:03:28.591 20:30:12 -- setup/common.sh@33 -- # return 0
00:03:28.591 20:30:12 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:28.591 20:30:12 -- setup/hugepages.sh@112 -- # get_nodes
00:03:28.591 20:30:12 -- setup/hugepages.sh@27 -- # local node
00:03:28.591 20:30:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:28.591 20:30:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:28.591 20:30:12 -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:28.591 20:30:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:28.591 20:30:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:28.591 20:30:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:28.591 20:30:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:28.591 20:30:12 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:28.591 20:30:12 -- setup/common.sh@18 -- # local node=0
00:03:28.591 20:30:12 -- setup/common.sh@19 -- # local var val
00:03:28.591 20:30:12 -- setup/common.sh@20 -- # local mem_f mem
00:03:28.591 20:30:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.591 20:30:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:28.591 20:30:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:28.591 20:30:12 -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.591 20:30:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.591 20:30:12 -- setup/common.sh@31 -- # IFS=': '
00:03:28.591 20:30:12 -- setup/common.sh@31 -- # read -r var val _
00:03:28.591 20:30:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 6901784 kB' 'MemUsed: 5399364 kB' 'Active: 2190456 kB' 'Inactive: 727700 kB' 'Active(anon): 97360 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2093096 kB' 'Inactive(file): 711016 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 32 kB' 'Writeback: 0 kB' 'FilePages: 2821004 kB' 'Mapped: 25616 kB' 'AnonPages: 96480 kB' 'Shmem: 16892 kB' 'KernelStack: 3840 kB' 'PageTables: 9180 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'Slab: 171564 kB' 'SReclaimable: 121796 kB' 'SUnreclaim: 49768 kB' 'AnonHugePages: 10240 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:28.593 20:30:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:28.593 20:30:12 -- setup/common.sh@33 -- # echo 0
00:03:28.593 20:30:12 -- setup/common.sh@33 -- # return 0
00:03:28.593 node0=1024 expecting 1024
00:03:28.593 20:30:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:28.593 20:30:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:28.593 20:30:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:28.593 20:30:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:28.593 20:30:12 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:28.593 20:30:12 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
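The per-node pass repeats the same lookup against /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0" prefix (hence the mem=("${mem[@]#Node +([0-9]) }") strip) and expose MemUsed, which /proc/meminfo lacks. A sketch of the node walk, with nodes_sys and the node0=... output read off the trace and the loop body simplified:

    # Enumerate NUMA nodes the way hugepages.sh@29 does (sketch; single
    # node on this VM). Uses the get_meminfo helper sketched earlier.
    shopt -s extglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        node=${node##*node}
        nodes_sys[node]=$(get_meminfo HugePages_Total "$node")
    done
    for node in "${!nodes_sys[@]}"; do
        # Per-node surplus would inflate the count beyond the expectation.
        surp=$(get_meminfo HugePages_Surp "$node")
        echo "node$node=$((nodes_sys[node] + surp)) expecting ${nodes_sys[node]}"
    done

On this single-node guest that prints node0=1024 expecting 1024, matching the stdout above.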
00:03:28.593 20:30:12 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:28.593 20:30:12 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:28.593 20:30:12 -- setup/hugepages.sh@202 -- # setup output
00:03:28.593 20:30:12 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:28.593 20:30:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:28.925 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev
00:03:28.925 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:28.925 INFO: Requested 512 hugepages but 1024 already allocated on node0
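The INFO line is scripts/setup.sh declining to touch the pool: the job exported NRHUGE=512 with CLEAR_HUGE=no, and node0 already holds 1024 pages. A hedged sketch of that decision; the real logic lives in spdk/scripts/setup.sh, the sysfs path is the standard 2 MB hugepage knob, and everything else here is assumed:

    NRHUGE=${NRHUGE:-512}
    CLEAR_HUGE=${CLEAR_HUGE:-no}
    hp=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    allocated=$(<"$hp")
    if [[ $CLEAR_HUGE != yes ]] && (( allocated >= NRHUGE )); then
        echo "INFO: Requested $NRHUGE hugepages but $allocated already allocated on node0"
    else
        # Writing to nr_hugepages resizes the node's static hugepage pool.
        echo "$NRHUGE" >"$hp"
    fi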
00:03:28.925 20:30:12 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:28.925 20:30:12 -- setup/hugepages.sh@89 -- # local node
00:03:28.925 20:30:12 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:28.925 20:30:12 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:28.925 20:30:12 -- setup/hugepages.sh@92 -- # local surp
00:03:28.925 20:30:12 -- setup/hugepages.sh@93 -- # local resv
00:03:28.925 20:30:12 -- setup/hugepages.sh@94 -- # local anon
00:03:28.925 20:30:12 -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]]
00:03:28.925 20:30:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:28.925 20:30:12 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:28.925 20:30:12 -- setup/common.sh@18 -- # local node=
00:03:28.925 20:30:12 -- setup/common.sh@19 -- # local var val
00:03:28.925 20:30:12 -- setup/common.sh@20 -- # local mem_f mem
00:03:28.925 20:30:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.925 20:30:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:28.925 20:30:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:28.925 20:30:12 -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.925 20:30:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.925 20:30:12 -- setup/common.sh@31 -- # IFS=': '
00:03:28.925 20:30:12 -- setup/common.sh@31 -- # read -r var val _
00:03:28.925 20:30:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 6900728 kB' 'MemAvailable: 9528124 kB' 'Buffers: 2068 kB' 'Cached: 2818940 kB' 'SwapCached: 0 kB' 'Active: 2190092 kB' 'Inactive: 727704 kB' 'Active(anon): 96996 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2093096 kB' 'Inactive(file): 711020 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 96064 kB' 'Mapped: 25252 kB' 'Shmem: 16892 kB' 'Slab: 171604 kB' 'SReclaimable: 121796 kB' 'SUnreclaim: 49808 kB' 'KernelStack: 3856 kB' 'PageTables: 8216 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5101996 kB' 'Committed_AS: 344308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690404 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 10240 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 89964 kB' 'DirectMap2M: 5152768 kB' 'DirectMap1G: 9437184 kB'
00:03:28.926 20:30:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:28.926 20:30:12 -- setup/common.sh@33 -- # echo 10240
00:03:28.926 20:30:12 -- setup/common.sh@33 -- # return 0
00:03:28.926 20:30:12 -- setup/hugepages.sh@97 -- # anon=10240
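Before trusting AnonHugePages, hugepages.sh@96 consults /sys/kernel/mm/transparent_hugepage/enabled; the bracketed token in "[always] madvise never" marks the active mode, and only [never] would make the counter irrelevant. THP is "always" on this guest, so the helper reads AnonHugePages (10240 kB). A sketch of that gate; the sysfs path is standard, and the anon name comes from the trace:

    anon=0
    thp=/sys/kernel/mm/transparent_hugepage/enabled
    # e.g. "[always] madvise never": THP active, so AnonHugePages is meaningful.
    if [[ -r $thp && $(<"$thp") != *'[never]'* ]]; then
        anon=$(get_meminfo AnonHugePages)
    fi
    echo "anon_hugepages=$anon"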
00:03:28.926 20:30:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:28.926 20:30:12 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:28.926 20:30:12 -- setup/common.sh@18 -- # local node=
00:03:28.926 20:30:12 -- setup/common.sh@19 -- # local var val
00:03:28.926 20:30:12 -- setup/common.sh@20 -- # local mem_f mem
00:03:28.926 20:30:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.926 20:30:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:28.926 20:30:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:28.926 20:30:12 -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.926 20:30:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.926 20:30:12 -- setup/common.sh@31 -- # IFS=': '
00:03:28.926 20:30:12 -- setup/common.sh@31 -- # read -r var val _
00:03:28.926 20:30:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 6901132 kB' 'MemAvailable: 9528528 kB' 'Buffers: 2068 kB' 'Cached: 2818940 kB' 'SwapCached: 0 kB' 'Active: 2190352 kB' 'Inactive: 727704 kB' 'Active(anon): 97256 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2093096 kB' 'Inactive(file): 711020 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 95968 kB' 'Mapped: 25252 kB' 'Shmem: 16892 kB' 'Slab: 171604 kB' 'SReclaimable: 121796 kB' 'SUnreclaim: 49808 kB' 'KernelStack: 3856 kB' 'PageTables: 7828 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5101996 kB' 'Committed_AS: 342912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690404 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 10240 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 89964 kB' 'DirectMap2M: 5152768 kB' 'DirectMap1G: 9437184 kB'
00:03:28.927 20:30:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:28.927 20:30:12 -- setup/common.sh@33 -- # echo 0
00:03:28.927 20:30:12 -- setup/common.sh@33 -- # return 0
00:03:28.927 20:30:12 -- setup/hugepages.sh@99 -- # surp=0
00:03:28.927 20:30:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:28.927 20:30:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:28.927 20:30:12 -- setup/common.sh@18 -- # local node=
00:03:28.927 20:30:12 -- setup/common.sh@19 -- # local var val
00:03:28.927 20:30:12 -- setup/common.sh@20 -- # local mem_f mem
00:03:28.927 20:30:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.927 20:30:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:28.927 20:30:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:28.927 20:30:12 -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.927 20:30:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.927 20:30:12 -- setup/common.sh@31 -- # IFS=': '
00:03:28.927 20:30:12 -- setup/common.sh@31 -- # read -r var val _
00:03:28.928 20:30:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 6901056 kB' 'MemAvailable: 9528452 kB' 'Buffers: 2068 kB' 'Cached: 2818940 kB' 'SwapCached: 0 kB' 'Active: 2190548 kB' 'Inactive: 727704 kB' 'Active(anon): 97452 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2093096 kB' 'Inactive(file): 711020 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 95968 kB' 'Mapped: 25252 kB' 'Shmem: 16892 kB' 'Slab: 171604 kB' 'SReclaimable: 121796 kB' 'SUnreclaim: 49808 kB' 'KernelStack: 3856 kB' 'PageTables: 7828 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5101996 kB' 'Committed_AS: 342912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690404 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 10240 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 89964 kB' 'DirectMap2M: 5152768 kB' 'DirectMap1G: 9437184 kB'
00:03:28.928 20:30:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:28.928 20:30:12 -- setup/common.sh@32 -- # continue
00:03:28.928 20:30:12 -- setup/common.sh@32 -- # [[
Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.928 20:30:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.928 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.928 20:30:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.928 20:30:12 -- 
setup/common.sh@32 -- # continue 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.929 20:30:12 -- setup/common.sh@33 -- # echo 0 00:03:28.929 20:30:12 -- setup/common.sh@33 -- # return 0 00:03:28.929 nr_hugepages=1024 00:03:28.929 resv_hugepages=0 00:03:28.929 surplus_hugepages=0 00:03:28.929 anon_hugepages=10240 00:03:28.929 20:30:12 -- setup/hugepages.sh@100 -- # resv=0 00:03:28.929 20:30:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:28.929 20:30:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:28.929 20:30:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:28.929 20:30:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=10240 00:03:28.929 20:30:12 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:28.929 20:30:12 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:28.929 20:30:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:28.929 20:30:12 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:28.929 20:30:12 -- setup/common.sh@18 -- # local node= 00:03:28.929 20:30:12 -- setup/common.sh@19 -- # local var val 00:03:28.929 20:30:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.929 20:30:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.929 20:30:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.929 20:30:12 
-- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.929 20:30:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.929 20:30:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.929 20:30:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 6901032 kB' 'MemAvailable: 9528428 kB' 'Buffers: 2068 kB' 'Cached: 2818940 kB' 'SwapCached: 0 kB' 'Active: 2190548 kB' 'Inactive: 727704 kB' 'Active(anon): 97452 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2093096 kB' 'Inactive(file): 711020 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 95968 kB' 'Mapped: 25252 kB' 'Shmem: 16892 kB' 'Slab: 171604 kB' 'SReclaimable: 121796 kB' 'SUnreclaim: 49808 kB' 'KernelStack: 3856 kB' 'PageTables: 7828 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5101996 kB' 'Committed_AS: 342912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 38748 kB' 'VmallocChunk: 34359690404 kB' 'Percpu: 1968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 10240 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'DirectMap4k: 89964 kB' 'DirectMap2M: 5152768 kB' 'DirectMap1G: 9437184 kB' 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.929 
20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.929 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.929 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # 
[[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.930 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.930 20:30:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:28.930 20:30:12 -- setup/common.sh@33 -- # echo 1024 00:03:28.930 20:30:12 -- setup/common.sh@33 -- # return 0 00:03:28.930 20:30:12 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:28.930 20:30:12 -- setup/hugepages.sh@112 -- # get_nodes 00:03:28.930 20:30:12 -- setup/hugepages.sh@27 -- # local node 00:03:28.930 20:30:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:28.930 20:30:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:28.930 20:30:12 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:28.930 20:30:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:28.930 20:30:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:28.930 20:30:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:28.930 20:30:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:28.930 20:30:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.930 20:30:12 -- setup/common.sh@18 -- # local node=0 00:03:28.930 20:30:12 -- setup/common.sh@19 -- # local var val 00:03:28.930 20:30:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:28.930 20:30:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.930 20:30:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:28.930 20:30:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:28.930 20:30:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.930 20:30:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.931 20:30:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:28.931 20:30:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12301148 kB' 'MemFree: 6900732 kB' 'MemUsed: 5400416 kB' 'Active: 2190744 kB' 'Inactive: 727704 kB' 'Active(anon): 97648 kB' 'Inactive(anon): 16684 kB' 'Active(file): 2093096 kB' 'Inactive(file): 711020 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2821008 kB' 'Mapped: 25252 kB' 'AnonPages: 95968 kB' 'Shmem: 16892 kB' 'KernelStack: 3856 kB' 'PageTables: 7828 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'Slab: 171604 kB' 'SReclaimable: 121796 kB' 'SUnreclaim: 49808 kB' 'AnonHugePages: 10240 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 
00:03:28.931 20:30:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # continue 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:28.931 20:30:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:28.931 20:30:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.931 20:30:12 -- setup/common.sh@33 -- # echo 0 00:03:28.931 20:30:12 -- setup/common.sh@33 -- # return 0 00:03:28.931 20:30:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:28.931 20:30:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:28.931 node0=1024 expecting 1024 00:03:28.931 ************************************ 00:03:28.931 END TEST no_shrink_alloc 00:03:28.931 ************************************ 00:03:28.931 20:30:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:28.931 20:30:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:28.931 20:30:12 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:28.931 20:30:12 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:28.931 00:03:28.931 real 0m0.669s 00:03:28.931 user 0m0.337s 00:03:28.931 sys 0m0.402s 00:03:28.931 20:30:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:28.931 20:30:12 -- common/autotest_common.sh@10 -- # set +x 00:03:29.190 20:30:12 -- setup/hugepages.sh@217 -- # clear_hp 00:03:29.190 20:30:12 -- setup/hugepages.sh@37 -- # local node hp 00:03:29.190 20:30:12 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:29.190 20:30:12 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:29.190 20:30:12 -- setup/hugepages.sh@41 -- # echo 0 00:03:29.190 20:30:12 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:29.190 20:30:12 -- setup/hugepages.sh@41 -- # echo 0 00:03:29.190 20:30:12 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:29.190 20:30:12 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:29.190 ************************************ 00:03:29.190 END TEST hugepages 00:03:29.190 ************************************ 00:03:29.190 00:03:29.190 real 0m2.963s 00:03:29.190 user 0m1.372s 00:03:29.190 sys 0m1.790s 
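The trace above is setup/common.sh's get_meminfo helper at work: it snapshots /proc/meminfo (or /sys/devices/system/node/node<N>/meminfo when a node argument is given), strips any "Node N " prefix, then scans field by field until it reaches the requested key and echoes that key's value; hugepages.sh then checks that HugePages_Total equals nr_hugepages plus the surplus and reserved counts it just read. A minimal sketch of that lookup, simplified from the traced helper and not the verbatim SPDK code:

#!/usr/bin/env bash
# Sketch of the meminfo lookup traced above (simplified; assumes bash >= 4).
shopt -s extglob

get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo var val _
    # Per-node queries read that node's own meminfo file instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# The consistency check the trace performs after its three lookups:
total=$(get_meminfo HugePages_Total)
surp=$(get_meminfo HugePages_Surp)
resv=$(get_meminfo HugePages_Rsvd)
nr=$(< /proc/sys/vm/nr_hugepages)
(( total == nr + surp + resv )) && echo "hugepage accounting is consistent"
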
00:03:29.190 20:30:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:29.190 20:30:12 -- common/autotest_common.sh@10 -- # set +x 00:03:29.190 20:30:12 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:29.190 20:30:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:29.190 20:30:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:29.190 20:30:12 -- common/autotest_common.sh@10 -- # set +x 00:03:29.190 ************************************ 00:03:29.190 START TEST driver 00:03:29.190 ************************************ 00:03:29.190 20:30:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:29.190 * Looking for test storage... 00:03:29.190 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:29.190 20:30:12 -- setup/driver.sh@68 -- # setup reset 00:03:29.190 20:30:12 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:29.190 20:30:12 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:29.759 20:30:12 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:29.759 20:30:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:29.759 20:30:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:29.759 20:30:12 -- common/autotest_common.sh@10 -- # set +x 00:03:29.759 ************************************ 00:03:29.759 START TEST guess_driver 00:03:29.759 ************************************ 00:03:29.759 20:30:12 -- common/autotest_common.sh@1104 -- # guess_driver 00:03:29.759 20:30:12 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:29.759 20:30:12 -- setup/driver.sh@47 -- # local fail=0 00:03:29.759 20:30:12 -- setup/driver.sh@49 -- # pick_driver 00:03:29.759 20:30:13 -- setup/driver.sh@36 -- # vfio 00:03:29.759 20:30:13 -- setup/driver.sh@21 -- # local iommu_grups 00:03:29.759 20:30:13 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:29.759 20:30:13 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:29.759 20:30:13 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:29.759 20:30:13 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:03:29.759 20:30:13 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:03:29.759 20:30:13 -- setup/driver.sh@32 -- # return 1 00:03:29.759 20:30:13 -- setup/driver.sh@38 -- # uio 00:03:29.759 20:30:13 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:03:29.759 20:30:13 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:03:29.759 20:30:13 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:03:29.759 20:30:13 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:03:29.759 20:30:13 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/3.10.0-1160.114.2.el7.x86_64/kernel/drivers/uio/uio.ko.xz 00:03:29.759 insmod /lib/modules/3.10.0-1160.114.2.el7.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:03:29.759 20:30:13 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:03:29.759 Looking for driver=uio_pci_generic 00:03:29.759 20:30:13 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:03:29.759 20:30:13 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:29.759 20:30:13 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:03:29.759 20:30:13 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:29.759 20:30:13 -- setup/driver.sh@45 -- # setup output config 00:03:29.759 20:30:13 -- 
setup/common.sh@9 -- # [[ output == output ]] 00:03:29.759 20:30:13 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:30.019 20:30:13 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:03:30.019 20:30:13 -- setup/driver.sh@58 -- # continue 00:03:30.019 20:30:13 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:30.019 20:30:13 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:30.019 20:30:13 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:30.019 20:30:13 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:30.019 20:30:13 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:30.019 20:30:13 -- setup/driver.sh@65 -- # setup reset 00:03:30.019 20:30:13 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:30.019 20:30:13 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:30.587 00:03:30.587 real 0m0.879s 00:03:30.587 user 0m0.312s 00:03:30.587 sys 0m0.565s 00:03:30.587 20:30:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.587 20:30:13 -- common/autotest_common.sh@10 -- # set +x 00:03:30.587 ************************************ 00:03:30.587 END TEST guess_driver 00:03:30.587 ************************************ 00:03:30.587 ************************************ 00:03:30.587 END TEST driver 00:03:30.587 ************************************ 00:03:30.587 00:03:30.587 real 0m1.427s 00:03:30.587 user 0m0.514s 00:03:30.587 sys 0m0.924s 00:03:30.587 20:30:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.587 20:30:13 -- common/autotest_common.sh@10 -- # set +x 00:03:30.587 20:30:13 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:30.587 20:30:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:30.587 20:30:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:30.587 20:30:13 -- common/autotest_common.sh@10 -- # set +x 00:03:30.587 ************************************ 00:03:30.587 START TEST devices 00:03:30.587 ************************************ 00:03:30.587 20:30:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:30.845 * Looking for test storage... 
00:03:30.845 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:30.845 20:30:14 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:30.845 20:30:14 -- setup/devices.sh@192 -- # setup reset 00:03:30.845 20:30:14 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:30.845 20:30:14 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:31.106 20:30:14 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:31.106 20:30:14 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:31.106 20:30:14 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:31.106 20:30:14 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:31.106 20:30:14 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:31.106 20:30:14 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:31.106 20:30:14 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:31.106 20:30:14 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:31.106 20:30:14 -- common/autotest_common.sh@1649 -- # return 1 00:03:31.106 20:30:14 -- setup/devices.sh@196 -- # blocks=() 00:03:31.106 20:30:14 -- setup/devices.sh@196 -- # declare -a blocks 00:03:31.106 20:30:14 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:31.106 20:30:14 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:31.106 20:30:14 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:31.106 20:30:14 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:31.106 20:30:14 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:31.106 20:30:14 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:31.106 20:30:14 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:03:31.106 20:30:14 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:03:31.106 20:30:14 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:31.106 20:30:14 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:03:31.106 20:30:14 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:03:31.106 No valid GPT data, bailing 00:03:31.106 20:30:14 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:31.106 20:30:14 -- scripts/common.sh@393 -- # pt= 00:03:31.106 20:30:14 -- scripts/common.sh@394 -- # return 1 00:03:31.106 20:30:14 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:31.106 20:30:14 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:31.106 20:30:14 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:31.106 20:30:14 -- setup/common.sh@80 -- # echo 5368709120 00:03:31.106 20:30:14 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:03:31.106 20:30:14 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:31.106 20:30:14 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:03:31.106 20:30:14 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:31.106 20:30:14 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:31.106 20:30:14 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:31.106 20:30:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:31.106 20:30:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:31.106 20:30:14 -- common/autotest_common.sh@10 -- # set +x 00:03:31.106 ************************************ 00:03:31.106 START TEST nvme_mount 00:03:31.106 ************************************ 00:03:31.106 20:30:14 -- common/autotest_common.sh@1104 -- # nvme_mount 00:03:31.106 20:30:14 -- 
setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:31.106 20:30:14 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:31.106 20:30:14 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:31.106 20:30:14 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:31.106 20:30:14 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:31.106 20:30:14 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:31.106 20:30:14 -- setup/common.sh@40 -- # local part_no=1 00:03:31.106 20:30:14 -- setup/common.sh@41 -- # local size=1073741824 00:03:31.106 20:30:14 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:31.106 20:30:14 -- setup/common.sh@44 -- # parts=() 00:03:31.106 20:30:14 -- setup/common.sh@44 -- # local parts 00:03:31.106 20:30:14 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:31.106 20:30:14 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:31.106 20:30:14 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:31.106 20:30:14 -- setup/common.sh@46 -- # (( part++ )) 00:03:31.106 20:30:14 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:31.106 20:30:14 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:31.106 20:30:14 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:31.106 20:30:14 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:32.482 Creating new GPT entries. 00:03:32.482 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:32.482 other utilities. 00:03:32.483 20:30:15 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:32.483 20:30:15 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:32.483 20:30:15 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:32.483 20:30:15 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:32.483 20:30:15 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:33.420 Creating new GPT entries. 00:03:33.420 The operation has completed successfully. 
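Here the helper zaps the disk's partition structures, launches sync_dev_uevents.sh in the background to watch for the new partition's kernel uevent, creates the 128 MiB partition (sectors 2048-264191) under flock, and waits on the watcher before touching the new node; that ordering is what keeps the mkfs and mount that follow from racing udev. A rough standalone equivalent, with stock udevadm settle standing in for the repo's sync_dev_uevents.sh helper (an approximation, not the literal script):

#!/usr/bin/env bash
# Rough equivalent of the partitioning step traced above; udevadm settle is
# substituted for scripts/sync_dev_uevents.sh (an approximation).
set -e
disk=/dev/nvme0n1
part=${disk}p1
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

sgdisk "$disk" --zap-all                          # destroy GPT and MBR data
flock "$disk" sgdisk "$disk" --new=1:2048:264191  # 262144 sectors = 128 MiB
udevadm settle                                    # wait for the partition uevent
[[ -b $part ]]                                    # /dev node should exist now
mkdir -p "$mnt"
mkfs.ext4 -qF "$part"
mount "$part" "$mnt"
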
00:03:33.420 20:30:16 -- setup/common.sh@57 -- # (( part++ )) 00:03:33.420 20:30:16 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:33.420 20:30:16 -- setup/common.sh@62 -- # wait 34557 00:03:33.420 20:30:16 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:33.420 20:30:16 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:03:33.420 20:30:16 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:33.420 20:30:16 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:33.420 20:30:16 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:33.420 20:30:16 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:33.420 20:30:16 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:33.420 20:30:16 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:03:33.420 20:30:16 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:33.420 20:30:16 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:33.420 20:30:16 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:33.420 20:30:16 -- setup/devices.sh@53 -- # local found=0 00:03:33.420 20:30:16 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:33.420 20:30:16 -- setup/devices.sh@56 -- # : 00:03:33.420 20:30:16 -- setup/devices.sh@59 -- # local pci status 00:03:33.420 20:30:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.420 20:30:16 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:03:33.420 20:30:16 -- setup/devices.sh@47 -- # setup output config 00:03:33.420 20:30:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:33.420 20:30:16 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:33.678 20:30:16 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:33.678 20:30:16 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:33.678 20:30:16 -- setup/devices.sh@63 -- # found=1 00:03:33.678 20:30:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.678 20:30:17 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:33.678 20:30:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.678 20:30:17 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:33.678 20:30:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.678 20:30:17 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:33.678 20:30:17 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:33.678 20:30:17 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:33.678 20:30:17 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:33.678 20:30:17 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:33.678 20:30:17 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:33.678 20:30:17 -- setup/devices.sh@20 -- # mountpoint -q 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:33.678 20:30:17 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:33.678 20:30:17 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:33.678 20:30:17 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:33.678 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:33.678 20:30:17 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:33.678 20:30:17 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:33.937 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:33.937 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:33.937 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:33.937 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:33.937 20:30:17 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:03:33.937 20:30:17 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:03:33.937 20:30:17 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:33.937 20:30:17 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:33.937 20:30:17 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:33.937 20:30:17 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:33.937 20:30:17 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:33.937 20:30:17 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:03:33.937 20:30:17 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:33.937 20:30:17 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:33.937 20:30:17 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:33.937 20:30:17 -- setup/devices.sh@53 -- # local found=0 00:03:33.937 20:30:17 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:33.937 20:30:17 -- setup/devices.sh@56 -- # : 00:03:33.937 20:30:17 -- setup/devices.sh@59 -- # local pci status 00:03:33.937 20:30:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.937 20:30:17 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:03:33.937 20:30:17 -- setup/devices.sh@47 -- # setup output config 00:03:33.937 20:30:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:33.937 20:30:17 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:34.195 20:30:17 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:34.195 20:30:17 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:34.195 20:30:17 -- setup/devices.sh@63 -- # found=1 00:03:34.195 20:30:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.195 20:30:17 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:34.195 20:30:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.195 20:30:17 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:34.195 20:30:17 --
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.195 20:30:17 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:34.195 20:30:17 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:34.195 20:30:17 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:34.195 20:30:17 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:34.195 20:30:17 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:34.195 20:30:17 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:34.195 20:30:17 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:03:34.195 20:30:17 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:03:34.195 20:30:17 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:34.195 20:30:17 -- setup/devices.sh@50 -- # local mount_point= 00:03:34.195 20:30:17 -- setup/devices.sh@51 -- # local test_file= 00:03:34.195 20:30:17 -- setup/devices.sh@53 -- # local found=0 00:03:34.195 20:30:17 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:34.195 20:30:17 -- setup/devices.sh@59 -- # local pci status 00:03:34.195 20:30:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.195 20:30:17 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:03:34.195 20:30:17 -- setup/devices.sh@47 -- # setup output config 00:03:34.195 20:30:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.195 20:30:17 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:34.453 20:30:17 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:34.453 20:30:17 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:34.453 20:30:17 -- setup/devices.sh@63 -- # found=1 00:03:34.453 20:30:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.453 20:30:17 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:34.453 20:30:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.453 20:30:17 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:34.453 20:30:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.712 20:30:17 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:34.712 20:30:17 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:34.712 20:30:17 -- setup/devices.sh@68 -- # return 0 00:03:34.712 20:30:17 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:34.712 20:30:17 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:34.712 20:30:18 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:34.712 20:30:18 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:34.712 20:30:18 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:34.712 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:34.712 00:03:34.712 real 0m3.490s 00:03:34.712 user 0m0.544s 00:03:34.712 sys 0m0.839s 00:03:34.712 20:30:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:34.712 20:30:18 -- common/autotest_common.sh@10 -- # set +x 00:03:34.712 ************************************ 00:03:34.712 END TEST nvme_mount 00:03:34.712 ************************************ 00:03:34.712 20:30:18 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:34.712 20:30:18 -- common/autotest_common.sh@1077 
-- # '[' 2 -le 1 ']' 00:03:34.712 20:30:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:34.712 20:30:18 -- common/autotest_common.sh@10 -- # set +x 00:03:34.712 ************************************ 00:03:34.712 START TEST dm_mount 00:03:34.712 ************************************ 00:03:34.712 20:30:18 -- common/autotest_common.sh@1104 -- # dm_mount 00:03:34.712 20:30:18 -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:34.712 20:30:18 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:34.712 20:30:18 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:34.712 20:30:18 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:34.712 20:30:18 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:34.712 20:30:18 -- setup/common.sh@40 -- # local part_no=2 00:03:34.712 20:30:18 -- setup/common.sh@41 -- # local size=1073741824 00:03:34.712 20:30:18 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:34.712 20:30:18 -- setup/common.sh@44 -- # parts=() 00:03:34.712 20:30:18 -- setup/common.sh@44 -- # local parts 00:03:34.712 20:30:18 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:34.712 20:30:18 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:34.712 20:30:18 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:34.712 20:30:18 -- setup/common.sh@46 -- # (( part++ )) 00:03:34.712 20:30:18 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:34.712 20:30:18 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:34.712 20:30:18 -- setup/common.sh@46 -- # (( part++ )) 00:03:34.712 20:30:18 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:34.712 20:30:18 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:34.712 20:30:18 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:34.712 20:30:18 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:35.647 Creating new GPT entries. 00:03:35.647 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:35.647 other utilities. 00:03:35.647 20:30:19 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:35.647 20:30:19 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:35.647 20:30:19 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:35.647 20:30:19 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:35.647 20:30:19 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:37.024 Creating new GPT entries. 00:03:37.024 The operation has completed successfully. 00:03:37.024 20:30:20 -- setup/common.sh@57 -- # (( part++ )) 00:03:37.024 20:30:20 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:37.024 20:30:20 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:37.024 20:30:20 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:37.024 20:30:20 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:03:37.961 The operation has completed successfully. 
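The two dm_mount partitions land back to back: the loop advances part_start to part_end + 1 each iteration, so the bounds logged above check out as

  part 1: 2048   .. 2048   + 262144 - 1 = 264191   (sgdisk --new=1:2048:264191)
  part 2: 264192 .. 264192 + 262144 - 1 = 526335   (sgdisk --new=2:264192:526335)

with both partitions 262144 LBAs long, the same 1073741824 / 4096 quotient used for nvme_mount.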
00:03:37.961 20:30:21 -- setup/common.sh@57 -- # (( part++ )) 00:03:37.961 20:30:21 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:37.961 20:30:21 -- setup/common.sh@62 -- # wait 34876 00:03:37.961 20:30:21 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:37.961 20:30:21 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:37.961 20:30:21 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:37.961 20:30:21 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:37.961 20:30:21 -- setup/devices.sh@160 -- # for t in {1..5} 00:03:37.961 20:30:21 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:37.961 20:30:21 -- setup/devices.sh@161 -- # break 00:03:37.961 20:30:21 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:37.961 20:30:21 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:37.961 20:30:21 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:37.961 20:30:21 -- setup/devices.sh@166 -- # dm=dm-0 00:03:37.961 20:30:21 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:37.961 20:30:21 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:37.961 20:30:21 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:37.961 20:30:21 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:03:37.961 20:30:21 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:37.961 20:30:21 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:37.961 20:30:21 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:37.961 20:30:21 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:37.961 20:30:21 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:37.961 20:30:21 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:03:37.961 20:30:21 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:37.961 20:30:21 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:37.961 20:30:21 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:37.961 20:30:21 -- setup/devices.sh@53 -- # local found=0 00:03:37.961 20:30:21 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:37.961 20:30:21 -- setup/devices.sh@56 -- # : 00:03:37.961 20:30:21 -- setup/devices.sh@59 -- # local pci status 00:03:37.961 20:30:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.961 20:30:21 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:03:37.961 20:30:21 -- setup/devices.sh@47 -- # setup output config 00:03:37.961 20:30:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.961 20:30:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:38.220 20:30:21 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:38.220 20:30:21 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == 
*\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:38.220 20:30:21 -- setup/devices.sh@63 -- # found=1 00:03:38.220 20:30:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.220 20:30:21 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:38.220 20:30:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.220 20:30:21 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:38.220 20:30:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.479 20:30:21 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:38.479 20:30:21 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:03:38.479 20:30:21 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:38.479 20:30:21 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:38.479 20:30:21 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:38.479 20:30:21 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:38.479 20:30:21 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:38.479 20:30:21 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:03:38.479 20:30:21 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:38.479 20:30:21 -- setup/devices.sh@50 -- # local mount_point= 00:03:38.479 20:30:21 -- setup/devices.sh@51 -- # local test_file= 00:03:38.479 20:30:21 -- setup/devices.sh@53 -- # local found=0 00:03:38.479 20:30:21 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:38.479 20:30:21 -- setup/devices.sh@59 -- # local pci status 00:03:38.479 20:30:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.479 20:30:21 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:03:38.479 20:30:21 -- setup/devices.sh@47 -- # setup output config 00:03:38.479 20:30:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:38.479 20:30:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:38.479 20:30:21 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:38.479 20:30:21 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:38.479 20:30:21 -- setup/devices.sh@63 -- # found=1 00:03:38.479 20:30:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.738 20:30:21 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:38.738 20:30:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.738 20:30:22 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:03:38.738 20:30:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.738 20:30:22 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:38.738 20:30:22 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:38.738 20:30:22 -- setup/devices.sh@68 -- # return 0 00:03:38.738 20:30:22 -- setup/devices.sh@187 -- # cleanup_dm 00:03:38.738 20:30:22 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:38.738 20:30:22 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:38.738 20:30:22 -- setup/devices.sh@37 -- # dmsetup remove --force 
nvme_dm_test 00:03:38.738 20:30:22 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:38.738 20:30:22 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:38.738 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:38.738 20:30:22 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:38.738 20:30:22 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:38.738 ************************************ 00:03:38.738 END TEST dm_mount 00:03:38.738 ************************************ 00:03:38.738 00:03:38.738 real 0m4.107s 00:03:38.738 user 0m0.408s 00:03:38.738 sys 0m0.628s 00:03:38.738 20:30:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:38.738 20:30:22 -- common/autotest_common.sh@10 -- # set +x 00:03:38.998 20:30:22 -- setup/devices.sh@1 -- # cleanup 00:03:38.998 20:30:22 -- setup/devices.sh@11 -- # cleanup_nvme 00:03:38.998 20:30:22 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:38.998 20:30:22 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:38.998 20:30:22 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:38.998 20:30:22 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:38.998 20:30:22 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:38.998 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:38.998 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:38.998 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:38.998 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:38.998 20:30:22 -- setup/devices.sh@12 -- # cleanup_dm 00:03:38.998 20:30:22 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:38.998 20:30:22 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:38.998 20:30:22 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:38.998 20:30:22 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:38.998 20:30:22 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:38.998 20:30:22 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:38.998 ************************************ 00:03:38.998 END TEST devices 00:03:38.998 ************************************ 00:03:38.998 00:03:38.998 real 0m8.325s 00:03:38.998 user 0m1.275s 00:03:38.998 sys 0m1.877s 00:03:38.998 20:30:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:38.998 20:30:22 -- common/autotest_common.sh@10 -- # set +x 00:03:38.998 00:03:38.998 real 0m15.689s 00:03:38.998 user 0m4.393s 00:03:38.998 sys 0m6.479s 00:03:38.998 20:30:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:38.998 ************************************ 00:03:38.998 END TEST setup.sh 00:03:38.998 ************************************ 00:03:38.998 20:30:22 -- common/autotest_common.sh@10 -- # set +x 00:03:38.998 20:30:22 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:39.257 Hugepages 00:03:39.257 node hugesize free / total 00:03:39.257 node0 1048576kB 0 / 0 00:03:39.257 node0 2048kB 2048 / 2048 00:03:39.257 00:03:39.257 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:39.257 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:39.257 NVMe 0000:00:06.0 1b36 0010 0 nvme nvme0 nvme0n1 00:03:39.257 20:30:22 -- spdk/autotest.sh@141 -- # uname -s 00:03:39.257 20:30:22 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:03:39.257 20:30:22 --
spdk/autotest.sh@143 -- # nvme_namespace_revert 00:03:39.257 20:30:22 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:39.515 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:03:39.773 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:39.773 20:30:23 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:40.710 20:30:24 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:40.710 20:30:24 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:40.710 20:30:24 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:03:40.710 20:30:24 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:03:40.710 20:30:24 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:40.710 20:30:24 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:40.710 20:30:24 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:40.710 20:30:24 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:40.710 20:30:24 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:40.968 20:30:24 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:40.968 20:30:24 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:03:40.968 20:30:24 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:40.968 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:03:40.968 Waiting for block devices as requested 00:03:41.228 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:03:41.228 20:30:24 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:03:41.228 20:30:24 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:03:41.228 20:30:24 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:41.228 20:30:24 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:03:41.228 20:30:24 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:03:41.228 20:30:24 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:03:41.228 20:30:24 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:03:41.228 20:30:24 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:41.228 20:30:24 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:03:41.228 20:30:24 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:03:41.228 20:30:24 -- common/autotest_common.sh@1530 -- # grep oacs 00:03:41.228 20:30:24 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:03:41.228 20:30:24 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:03:41.228 20:30:24 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:03:41.228 20:30:24 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:03:41.228 20:30:24 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:03:41.228 20:30:24 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:03:41.228 20:30:24 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:03:41.228 20:30:24 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:03:41.228 20:30:24 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:03:41.228 20:30:24 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:03:41.228 20:30:24 -- common/autotest_common.sh@1542 -- # continue 00:03:41.228 20:30:24 -- 
spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:03:41.228 20:30:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:41.228 20:30:24 -- common/autotest_common.sh@10 -- # set +x 00:03:41.228 20:30:24 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:03:41.228 20:30:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:41.228 20:30:24 -- common/autotest_common.sh@10 -- # set +x 00:03:41.228 20:30:24 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:41.487 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:03:41.747 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:03:41.747 20:30:25 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:03:41.747 20:30:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:41.747 20:30:25 -- common/autotest_common.sh@10 -- # set +x 00:03:41.747 20:30:25 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:03:41.747 20:30:25 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:41.747 20:30:25 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:41.747 20:30:25 -- common/autotest_common.sh@1562 -- # bdfs=() 00:03:41.747 20:30:25 -- common/autotest_common.sh@1562 -- # local bdfs 00:03:41.747 20:30:25 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:41.747 20:30:25 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:41.747 20:30:25 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:41.747 20:30:25 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:41.747 20:30:25 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:41.747 20:30:25 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:41.747 20:30:25 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:41.747 20:30:25 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:03:42.007 20:30:25 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:03:42.007 20:30:25 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:03:42.008 20:30:25 -- common/autotest_common.sh@1565 -- # device=0x0010 00:03:42.008 20:30:25 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:42.008 20:30:25 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:03:42.008 20:30:25 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:03:42.008 20:30:25 -- common/autotest_common.sh@1578 -- # return 0 00:03:42.008 20:30:25 -- spdk/autotest.sh@161 -- # '[' 1 -eq 1 ']' 00:03:42.008 20:30:25 -- spdk/autotest.sh@162 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:03:42.008 20:30:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:42.008 20:30:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:42.008 20:30:25 -- common/autotest_common.sh@10 -- # set +x 00:03:42.008 ************************************ 00:03:42.008 START TEST unittest 00:03:42.008 ************************************ 00:03:42.008 20:30:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:03:42.008 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:03:42.008 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:03:42.008 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:03:42.008 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:03:42.008 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../.. 
00:03:42.008 + rootdir=/home/vagrant/spdk_repo/spdk 00:03:42.008 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:03:42.008 ++ rpc_py=rpc_cmd 00:03:42.008 ++ set -e 00:03:42.008 ++ shopt -s nullglob 00:03:42.008 ++ shopt -s extglob 00:03:42.008 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:03:42.008 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:03:42.008 +++ CONFIG_RDMA=y 00:03:42.008 +++ CONFIG_UNIT_TESTS=y 00:03:42.008 +++ CONFIG_GOLANG=n 00:03:42.008 +++ CONFIG_FUSE=n 00:03:42.008 +++ CONFIG_ISAL=n 00:03:42.008 +++ CONFIG_VTUNE_DIR= 00:03:42.008 +++ CONFIG_CUSTOMOCF=n 00:03:42.008 +++ CONFIG_IPSEC_MB_DIR= 00:03:42.008 +++ CONFIG_VBDEV_COMPRESS=n 00:03:42.008 +++ CONFIG_OCF_PATH= 00:03:42.008 +++ CONFIG_SHARED=n 00:03:42.008 +++ CONFIG_DPDK_LIB_DIR= 00:03:42.008 +++ CONFIG_TESTS=y 00:03:42.008 +++ CONFIG_APPS=y 00:03:42.008 +++ CONFIG_ISAL_CRYPTO=n 00:03:42.008 +++ CONFIG_LIBDIR= 00:03:42.008 +++ CONFIG_DPDK_COMPRESSDEV=n 00:03:42.008 +++ CONFIG_DAOS_DIR= 00:03:42.008 +++ CONFIG_ISCSI_INITIATOR=n 00:03:42.008 +++ CONFIG_DPDK_PKG_CONFIG=n 00:03:42.008 +++ CONFIG_ASAN=y 00:03:42.008 +++ CONFIG_LTO=n 00:03:42.008 +++ CONFIG_CET=n 00:03:42.008 +++ CONFIG_FUZZER=n 00:03:42.008 +++ CONFIG_USDT=n 00:03:42.008 +++ CONFIG_VTUNE=n 00:03:42.008 +++ CONFIG_VHOST=y 00:03:42.008 +++ CONFIG_WPDK_DIR= 00:03:42.008 +++ CONFIG_UBLK=n 00:03:42.008 +++ CONFIG_URING=n 00:03:42.008 +++ CONFIG_SMA=n 00:03:42.008 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:03:42.008 +++ CONFIG_IDXD_KERNEL=n 00:03:42.008 +++ CONFIG_FC_PATH= 00:03:42.008 +++ CONFIG_PREFIX=/usr/local 00:03:42.008 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=n 00:03:42.008 +++ CONFIG_XNVME=n 00:03:42.008 +++ CONFIG_RDMA_PROV=verbs 00:03:42.008 +++ CONFIG_RDMA_SET_TOS=y 00:03:42.008 +++ CONFIG_FUZZER_LIB= 00:03:42.008 +++ CONFIG_HAVE_LIBARCHIVE=n 00:03:42.008 +++ CONFIG_ARCH=native 00:03:42.008 +++ CONFIG_PGO_CAPTURE=n 00:03:42.008 +++ CONFIG_DAOS=y 00:03:42.008 +++ CONFIG_WERROR=y 00:03:42.008 +++ CONFIG_DEBUG=y 00:03:42.008 +++ CONFIG_AVAHI=n 00:03:42.008 +++ CONFIG_CROSS_PREFIX= 00:03:42.008 +++ CONFIG_PGO_USE=n 00:03:42.008 +++ CONFIG_CRYPTO=n 00:03:42.008 +++ CONFIG_HAVE_ARC4RANDOM=n 00:03:42.008 +++ CONFIG_OPENSSL_PATH= 00:03:42.008 +++ CONFIG_EXAMPLES=y 00:03:42.008 +++ CONFIG_DPDK_INC_DIR= 00:03:42.008 +++ CONFIG_MAX_LCORES= 00:03:42.008 +++ CONFIG_VIRTIO=y 00:03:42.008 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:42.008 +++ CONFIG_IPSEC_MB=n 00:03:42.008 +++ CONFIG_UBSAN=n 00:03:42.008 +++ CONFIG_HAVE_EXECINFO_H=y 00:03:42.008 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:03:42.008 +++ CONFIG_HAVE_LIBBSD=n 00:03:42.008 +++ CONFIG_URING_PATH= 00:03:42.008 +++ CONFIG_NVME_CUSE=y 00:03:42.008 +++ CONFIG_URING_ZNS=n 00:03:42.008 +++ CONFIG_VFIO_USER=n 00:03:42.008 +++ CONFIG_FC=n 00:03:42.008 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=n 00:03:42.008 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:03:42.008 +++ CONFIG_RBD=n 00:03:42.008 +++ CONFIG_RAID5F=n 00:03:42.008 +++ CONFIG_VFIO_USER_DIR= 00:03:42.008 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:03:42.008 +++ CONFIG_TSAN=n 00:03:42.008 +++ CONFIG_IDXD=y 00:03:42.008 +++ CONFIG_OCF=n 00:03:42.008 +++ CONFIG_CRYPTO_MLX5=n 00:03:42.008 +++ CONFIG_FIO_PLUGIN=y 00:03:42.008 +++ CONFIG_COVERAGE=y 00:03:42.008 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:03:42.008 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:03:42.008 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 
00:03:42.008 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:03:42.008 +++ _root=/home/vagrant/spdk_repo/spdk 00:03:42.008 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:03:42.008 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:03:42.008 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:03:42.008 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:03:42.008 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:03:42.008 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:03:42.008 +++ VHOST_APP=("$_app_dir/vhost") 00:03:42.008 +++ DD_APP=("$_app_dir/spdk_dd") 00:03:42.008 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:03:42.008 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:03:42.008 +++ [[ #ifndef SPDK_CONFIG_H 00:03:42.008 #define SPDK_CONFIG_H 00:03:42.008 #define SPDK_CONFIG_APPS 1 00:03:42.008 #define SPDK_CONFIG_ARCH native 00:03:42.008 #define SPDK_CONFIG_ASAN 1 00:03:42.008 #undef SPDK_CONFIG_AVAHI 00:03:42.008 #undef SPDK_CONFIG_CET 00:03:42.008 #define SPDK_CONFIG_COVERAGE 1 00:03:42.008 #define SPDK_CONFIG_CROSS_PREFIX 00:03:42.008 #undef SPDK_CONFIG_CRYPTO 00:03:42.008 #undef SPDK_CONFIG_CRYPTO_MLX5 00:03:42.008 #undef SPDK_CONFIG_CUSTOMOCF 00:03:42.008 #define SPDK_CONFIG_DAOS 1 00:03:42.008 #define SPDK_CONFIG_DAOS_DIR 00:03:42.008 #define SPDK_CONFIG_DEBUG 1 00:03:42.008 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:03:42.008 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:42.008 #define SPDK_CONFIG_DPDK_INC_DIR 00:03:42.008 #define SPDK_CONFIG_DPDK_LIB_DIR 00:03:42.008 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:03:42.008 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:42.008 #define SPDK_CONFIG_EXAMPLES 1 00:03:42.008 #undef SPDK_CONFIG_FC 00:03:42.008 #define SPDK_CONFIG_FC_PATH 00:03:42.008 #define SPDK_CONFIG_FIO_PLUGIN 1 00:03:42.008 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:03:42.008 #undef SPDK_CONFIG_FUSE 00:03:42.008 #undef SPDK_CONFIG_FUZZER 00:03:42.008 #define SPDK_CONFIG_FUZZER_LIB 00:03:42.008 #undef SPDK_CONFIG_GOLANG 00:03:42.008 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:03:42.008 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:03:42.008 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:03:42.008 #undef SPDK_CONFIG_HAVE_LIBBSD 00:03:42.008 #undef SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 00:03:42.008 #define SPDK_CONFIG_IDXD 1 00:03:42.008 #undef SPDK_CONFIG_IDXD_KERNEL 00:03:42.008 #undef SPDK_CONFIG_IPSEC_MB 00:03:42.008 #define SPDK_CONFIG_IPSEC_MB_DIR 00:03:42.008 #undef SPDK_CONFIG_ISAL 00:03:42.008 #undef SPDK_CONFIG_ISAL_CRYPTO 00:03:42.008 #undef SPDK_CONFIG_ISCSI_INITIATOR 00:03:42.008 #define SPDK_CONFIG_LIBDIR 00:03:42.008 #undef SPDK_CONFIG_LTO 00:03:42.008 #define SPDK_CONFIG_MAX_LCORES 00:03:42.008 #define SPDK_CONFIG_NVME_CUSE 1 00:03:42.008 #undef SPDK_CONFIG_OCF 00:03:42.008 #define SPDK_CONFIG_OCF_PATH 00:03:42.008 #define SPDK_CONFIG_OPENSSL_PATH 00:03:42.008 #undef SPDK_CONFIG_PGO_CAPTURE 00:03:42.008 #undef SPDK_CONFIG_PGO_USE 00:03:42.008 #define SPDK_CONFIG_PREFIX /usr/local 00:03:42.008 #undef SPDK_CONFIG_RAID5F 00:03:42.008 #undef SPDK_CONFIG_RBD 00:03:42.008 #define SPDK_CONFIG_RDMA 1 00:03:42.008 #define SPDK_CONFIG_RDMA_PROV verbs 00:03:42.008 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:03:42.008 #undef SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 00:03:42.008 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:03:42.008 #undef SPDK_CONFIG_SHARED 00:03:42.008 #undef SPDK_CONFIG_SMA 00:03:42.008 #define SPDK_CONFIG_TESTS 1 00:03:42.008 #undef SPDK_CONFIG_TSAN 00:03:42.008 #undef SPDK_CONFIG_UBLK 
00:03:42.008 #undef SPDK_CONFIG_UBSAN 00:03:42.008 #define SPDK_CONFIG_UNIT_TESTS 1 00:03:42.008 #undef SPDK_CONFIG_URING 00:03:42.008 #define SPDK_CONFIG_URING_PATH 00:03:42.008 #undef SPDK_CONFIG_URING_ZNS 00:03:42.008 #undef SPDK_CONFIG_USDT 00:03:42.008 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:03:42.008 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:03:42.008 #undef SPDK_CONFIG_VFIO_USER 00:03:42.008 #define SPDK_CONFIG_VFIO_USER_DIR 00:03:42.008 #define SPDK_CONFIG_VHOST 1 00:03:42.008 #define SPDK_CONFIG_VIRTIO 1 00:03:42.008 #undef SPDK_CONFIG_VTUNE 00:03:42.008 #define SPDK_CONFIG_VTUNE_DIR 00:03:42.008 #define SPDK_CONFIG_WERROR 1 00:03:42.008 #define SPDK_CONFIG_WPDK_DIR 00:03:42.008 #undef SPDK_CONFIG_XNVME 00:03:42.008 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:03:42.008 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:03:42.008 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:42.008 +++ [[ -e /bin/wpdk_common.sh ]] 00:03:42.008 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:42.008 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:42.008 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:03:42.008 ++++ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:03:42.008 ++++ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:03:42.008 ++++ export PATH 00:03:42.008 ++++ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:03:42.009 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:03:42.009 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:03:42.009 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:03:42.009 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:03:42.009 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:03:42.009 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:03:42.009 +++ TEST_TAG=N/A 00:03:42.009 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:03:42.009 ++ : 1 00:03:42.009 ++ export RUN_NIGHTLY 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_RUN_VALGRIND 00:03:42.009 ++ : 1 00:03:42.009 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:03:42.009 ++ : 1 00:03:42.009 ++ export SPDK_TEST_UNITTEST 00:03:42.009 ++ : 00:03:42.009 ++ export SPDK_TEST_AUTOBUILD 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_TEST_RELEASE_BUILD 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_TEST_ISAL 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_TEST_ISCSI 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_TEST_ISCSI_INITIATOR 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_TEST_NVME 
00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_TEST_NVME_PMR 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_TEST_NVME_BP 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_TEST_NVME_CLI 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_TEST_NVME_CUSE 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_TEST_NVME_FDP 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_TEST_NVMF 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_TEST_VFIOUSER 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_TEST_VFIOUSER_QEMU 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_TEST_FUZZER 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_TEST_FUZZER_SHORT 00:03:42.009 ++ : rdma 00:03:42.009 ++ export SPDK_TEST_NVMF_TRANSPORT 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_TEST_RBD 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_TEST_VHOST 00:03:42.009 ++ : 1 00:03:42.009 ++ export SPDK_TEST_BLOCKDEV 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_TEST_IOAT 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_TEST_BLOBFS 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_TEST_VHOST_INIT 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_TEST_LVOL 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_TEST_VBDEV_COMPRESS 00:03:42.009 ++ : 1 00:03:42.009 ++ export SPDK_RUN_ASAN 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_RUN_UBSAN 00:03:42.009 ++ : 00:03:42.009 ++ export SPDK_RUN_EXTERNAL_DPDK 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_RUN_NON_ROOT 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_TEST_CRYPTO 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_TEST_FTL 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_TEST_OCF 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_TEST_VMD 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_TEST_OPAL 00:03:42.009 ++ : 00:03:42.009 ++ export SPDK_TEST_NATIVE_DPDK 00:03:42.009 ++ : true 00:03:42.009 ++ export SPDK_AUTOTEST_X 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_TEST_RAID5 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_TEST_URING 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_TEST_USDT 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_TEST_USE_IGB_UIO 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_TEST_SCHEDULER 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_TEST_SCANBUILD 00:03:42.009 ++ : 00:03:42.009 ++ export SPDK_TEST_NVMF_NICS 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_TEST_SMA 00:03:42.009 ++ : 1 00:03:42.009 ++ export SPDK_TEST_DAOS 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_TEST_XNVME 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_TEST_ACCEL_DSA 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_TEST_ACCEL_IAA 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_TEST_ACCEL_IOAT 00:03:42.009 ++ : 00:03:42.009 ++ export SPDK_TEST_FUZZER_TARGET 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_TEST_NVMF_MDNS 00:03:42.009 ++ : 0 00:03:42.009 ++ export SPDK_JSONRPC_GO_CLIENT 00:03:42.009 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:03:42.009 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:03:42.009 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:03:42.009 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:03:42.009 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:42.009 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:42.009 ++ export 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:42.009 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:42.009 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:03:42.009 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:03:42.009 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:03:42.009 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:03:42.009 ++ export PYTHONDONTWRITEBYTECODE=1 00:03:42.009 ++ PYTHONDONTWRITEBYTECODE=1 00:03:42.009 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:03:42.009 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:03:42.009 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:03:42.009 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:03:42.009 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:03:42.009 ++ rm -rf /var/tmp/asan_suppression_file 00:03:42.009 ++ cat 00:03:42.009 ++ echo leak:libfuse3.so 00:03:42.009 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:03:42.009 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:03:42.009 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:03:42.009 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:03:42.009 ++ '[' -z /var/spdk/dependencies ']' 00:03:42.009 ++ export DEPENDENCY_DIR 00:03:42.009 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:03:42.009 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:03:42.009 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:03:42.009 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:03:42.009 ++ export QEMU_BIN= 00:03:42.009 ++ QEMU_BIN= 00:03:42.009 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:03:42.009 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:03:42.009 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:03:42.009 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:03:42.009 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:42.009 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:42.009 ++ '[' 0 -eq 0 ']' 00:03:42.009 ++ export valgrind= 00:03:42.009 ++ valgrind= 00:03:42.009 +++ uname -s 00:03:42.009 ++ '[' Linux = Linux ']' 00:03:42.009 ++ HUGEMEM=4096 00:03:42.009 ++ export CLEAR_HUGE=yes 00:03:42.009 ++ CLEAR_HUGE=yes 00:03:42.009 ++ [[ 0 -eq 1 ]] 00:03:42.009 ++ [[ 0 -eq 1 ]] 00:03:42.009 ++ MAKE=make 00:03:42.009 +++ nproc 00:03:42.009 ++ MAKEFLAGS=-j10 00:03:42.009 ++ export HUGEMEM=4096 00:03:42.009 ++ HUGEMEM=4096 00:03:42.009 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:03:42.009 ++ NO_HUGE=() 00:03:42.009 ++ TEST_MODE= 00:03:42.009 ++ [[ -z '' ]] 00:03:42.009 ++ 
PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:03:42.009 ++ exec 00:03:42.009 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:03:42.009 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:03:42.009 ++ set_test_storage 2147483648 00:03:42.009 ++ [[ -v testdir ]] 00:03:42.009 ++ local requested_size=2147483648 00:03:42.009 ++ local mount target_dir 00:03:42.009 ++ local -A mounts fss sizes avails uses 00:03:42.009 ++ local source fs size avail mount use 00:03:42.009 ++ local storage_fallback storage_candidates 00:03:42.009 +++ mktemp -udt spdk.XXXXXX 00:03:42.009 ++ storage_fallback=/tmp/spdk.pCBGXW 00:03:42.009 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:03:42.009 ++ [[ -n '' ]] 00:03:42.009 ++ [[ -n '' ]] 00:03:42.009 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.pCBGXW/tests/unit /tmp/spdk.pCBGXW 00:03:42.009 ++ requested_size=2214592512 00:03:42.009 ++ read -r source fs size use avail _ mount 00:03:42.009 +++ df -T 00:03:42.009 +++ grep -v Filesystem 00:03:42.009 ++ mounts["$mount"]=devtmpfs 00:03:42.009 ++ fss["$mount"]=devtmpfs 00:03:42.009 ++ avails["$mount"]=6267637760 00:03:42.009 ++ sizes["$mount"]=6267637760 00:03:42.009 ++ uses["$mount"]=0 00:03:42.009 ++ read -r source fs size use avail _ mount 00:03:42.009 ++ mounts["$mount"]=tmpfs 00:03:42.009 ++ fss["$mount"]=tmpfs 00:03:42.009 ++ avails["$mount"]=6298185728 00:03:42.009 ++ sizes["$mount"]=6298185728 00:03:42.009 ++ uses["$mount"]=0 00:03:42.009 ++ read -r source fs size use avail _ mount 00:03:42.009 ++ mounts["$mount"]=tmpfs 00:03:42.009 ++ fss["$mount"]=tmpfs 00:03:42.009 ++ avails["$mount"]=6280884224 00:03:42.009 ++ sizes["$mount"]=6298185728 00:03:42.009 ++ uses["$mount"]=17301504 00:03:42.009 ++ read -r source fs size use avail _ mount 00:03:42.009 ++ mounts["$mount"]=tmpfs 00:03:42.009 ++ fss["$mount"]=tmpfs 00:03:42.009 ++ avails["$mount"]=6298185728 00:03:42.009 ++ sizes["$mount"]=6298185728 00:03:42.009 ++ uses["$mount"]=0 00:03:42.009 ++ read -r source fs size use avail _ mount 00:03:42.009 ++ mounts["$mount"]=/dev/vda1 00:03:42.009 ++ fss["$mount"]=xfs 00:03:42.009 ++ avails["$mount"]=14374301696 00:03:42.010 ++ sizes["$mount"]=21463302144 00:03:42.010 ++ uses["$mount"]=7089000448 00:03:42.010 ++ read -r source fs size use avail _ mount 00:03:42.010 ++ mounts["$mount"]=tmpfs 00:03:42.010 ++ fss["$mount"]=tmpfs 00:03:42.010 ++ avails["$mount"]=1259638784 00:03:42.010 ++ sizes["$mount"]=1259638784 00:03:42.010 ++ uses["$mount"]=0 00:03:42.010 ++ read -r source fs size use avail _ mount 00:03:42.010 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/centos7-vg-autotest/centos7-libvirt/output 00:03:42.010 ++ fss["$mount"]=fuse.sshfs 00:03:42.010 ++ avails["$mount"]=93502275584 00:03:42.010 ++ sizes["$mount"]=105088212992 00:03:42.010 ++ uses["$mount"]=6200504320 00:03:42.010 ++ read -r source fs size use avail _ mount 00:03:42.010 ++ printf '* Looking for test storage...\n' 00:03:42.010 * Looking for test storage... 
00:03:42.010 ++ local target_space new_size 00:03:42.010 ++ for target_dir in "${storage_candidates[@]}" 00:03:42.010 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:03:42.010 +++ awk '$1 !~ /Filesystem/{print $6}' 00:03:42.010 ++ mount=/ 00:03:42.010 ++ target_space=14374301696 00:03:42.010 ++ (( target_space == 0 || target_space < requested_size )) 00:03:42.010 ++ (( target_space >= requested_size )) 00:03:42.010 ++ [[ xfs == tmpfs ]] 00:03:42.010 ++ [[ xfs == ramfs ]] 00:03:42.010 ++ [[ / == / ]] 00:03:42.010 ++ new_size=9303592960 00:03:42.010 ++ (( new_size * 100 / sizes[/] > 95 )) 00:03:42.010 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:03:42.010 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:03:42.010 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:03:42.010 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:03:42.010 ++ return 0 00:03:42.010 ++ set -o errtrace 00:03:42.010 ++ shopt -s extdebug 00:03:42.010 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:03:42.010 ++ PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:03:42.010 20:30:25 -- common/autotest_common.sh@1672 -- # true 00:03:42.010 20:30:25 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:03:42.010 20:30:25 -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:03:42.010 20:30:25 -- common/autotest_common.sh@29 -- # exec 00:03:42.010 20:30:25 -- common/autotest_common.sh@31 -- # xtrace_restore 00:03:42.010 20:30:25 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:03:42.010 20:30:25 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:03:42.010 20:30:25 -- common/autotest_common.sh@18 -- # set -x 00:03:42.010 20:30:25 -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:03:42.010 20:30:25 -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']' 00:03:42.010 20:30:25 -- unit/unittest.sh@158 -- # '[' -z x ']' 00:03:42.010 20:30:25 -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']' 00:03:42.010 20:30:25 -- unit/unittest.sh@178 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:03:42.010 20:30:25 -- unit/unittest.sh@178 -- # CC_TYPE=CC_TYPE=gcc 00:03:42.010 20:30:25 -- unit/unittest.sh@179 -- # hash lcov 00:03:42.010 20:30:25 -- unit/unittest.sh@179 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:42.010 20:30:25 -- unit/unittest.sh@179 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:42.010 20:30:25 -- unit/unittest.sh@180 -- # cov_avail=yes 00:03:42.010 20:30:25 -- unit/unittest.sh@184 -- # '[' yes = yes ']' 00:03:42.010 20:30:25 -- unit/unittest.sh@186 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:03:42.010 20:30:25 -- unit/unittest.sh@189 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:03:42.010 20:30:25 -- unit/unittest.sh@191 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:03:42.010 20:30:25 -- unit/unittest.sh@199 -- # export 'LCOV_OPTS= 00:03:42.010 --rc lcov_branch_coverage=1 00:03:42.010 --rc lcov_function_coverage=1 00:03:42.010 --rc genhtml_branch_coverage=1 00:03:42.010 --rc genhtml_function_coverage=1 00:03:42.010 --rc genhtml_legend=1 00:03:42.010 --rc geninfo_all_blocks=1 00:03:42.010 ' 00:03:42.010 20:30:25 -- unit/unittest.sh@199 -- # LCOV_OPTS=' 00:03:42.010 --rc lcov_branch_coverage=1 00:03:42.010 --rc lcov_function_coverage=1 00:03:42.010 --rc genhtml_branch_coverage=1 00:03:42.010 --rc genhtml_function_coverage=1 00:03:42.010 --rc genhtml_legend=1 00:03:42.010 
--rc geninfo_all_blocks=1
00:03:42.010 '
00:03:42.010 20:30:25 -- unit/unittest.sh@200 -- # export 'LCOV=lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external'
00:03:42.010 20:30:25 -- unit/unittest.sh@200 -- # LCOV='lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external'
00:03:42.010 20:30:25 -- unit/unittest.sh@202 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . -t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info
00:03:50.164 geninfo: WARNING: GCOV did not produce any data (no functions found) for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno, ftl_band_upgrade.gcno and ftl_chunk_upgrade.gcno
00:04:05.044 geninfo: WARNING: GCOV did not produce any data (no functions found) for any of the header-compile objects under /home/vagrant/spdk_repo/spdk/test/cpp_headers/: pipe, accel, rpc, vfio_user_spec, version, accel_module, vfio_user_pci, bit_pool, ioat, blobfs, dma, trace_parser, opal_spec, uuid, bdev, env, hexlify, memory, likely, vhost, nbd, nvme_zns, idxd, env_dpdk, init, fd_group, bdev_module, opal, blob_bdev, event, base64, nvmf, nvmf_spec, blobfs_bdev, fd, barrier, crc32, nvmf_fc_spec, zipf, queue, scheduler, dif, lvol, scsi_spec, blob, nvmf_transport, cpuset, thread, tree, xor, assert, ftl, file, trace, endian, notify, util, log, sock, nvme_ocssd_spec, jsonrpc, config, histogram_data, nvme_intel, idxd_spec, crc64, crc16, bdev_zone, stdinc, nvme, vmd, scsi, conf, iscsi_spec, nvmf_cmd, ioat_spec, ublk, bit_array, pci_ids, nvme_spec, string, gpt_spec, nvme_ocssd, json, reduce, mmio (all *.gcno)
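For context on the capture above: lcov -c -i only records a zero-coverage baseline before any test runs, and the "no functions found" warnings are expected, since the test/cpp_headers objects exist only to prove each public header compiles standalone and so contain no executable code for gcov to count. The merge step that normally turns this baseline into a report happens later and is not part of this excerpt; below is a minimal bash sketch of the usual lcov flow, in which every output name other than ut_cov_base.info is illustrative rather than taken from this log:

    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external"
    cd /home/vagrant/spdk_repo/spdk
    # Zero-coverage baseline, as captured above by unittest.sh
    lcov $LCOV_OPTS -q -c -i -d . -t Baseline -o ../output/ut_coverage/ut_cov_base.info
    # ... the unit tests run here and write .gcda counter files ...
    # Post-run capture, then merge with the baseline so files that never
    # executed still appear in the report at 0% instead of being dropped
    lcov $LCOV_OPTS -q -c -d . -t UT -o ../output/ut_coverage/ut_cov_test.info
    lcov $LCOV_OPTS -q -a ../output/ut_coverage/ut_cov_base.info \
        -a ../output/ut_coverage/ut_cov_test.info \
        -o ../output/ut_coverage/ut_cov_total.info
    genhtml ../output/ut_coverage/ut_cov_total.info -o ../output/ut_coverage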
00:04:43.763 20:31:22 -- unit/unittest.sh@206 -- # uname -m
00:04:43.763 20:31:22 -- unit/unittest.sh@206 -- # '[' x86_64 = aarch64 ']'
00:04:43.763 20:31:22 -- unit/unittest.sh@210 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut
00:04:43.763 20:31:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:43.763 20:31:22 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:43.763 20:31:22 -- common/autotest_common.sh@10 -- # set +x
00:04:43.763 ************************************
00:04:43.763 START TEST unittest_pci_event
00:04:43.763 ************************************
00:04:43.763 20:31:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut
00:04:43.763
00:04:43.763
00:04:43.763 CUnit - A unit testing framework for C - Version 2.1-3
00:04:43.763 http://cunit.sourceforge.net/
00:04:43.763
00:04:43.763
00:04:43.763 Suite: pci_event
00:04:43.763 Test: test_pci_parse_event ...passed 00:04:43.763 00:04:43.763 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.763 suites 1 1 n/a 0 0 00:04:43.763 tests 1 1 1 0 0 00:04:43.763 asserts 15 15 15 0 n/a 00:04:43.763 00:04:43.763 Elapsed time = 0.000 seconds 00:04:43.763 [2024-04-15 20:31:22.087938] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:04:43.763 [2024-04-15 20:31:22.088318] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:04:43.763 ************************************ 00:04:43.763 END TEST unittest_pci_event 00:04:43.763 ************************************ 00:04:43.763 00:04:43.763 real 0m0.043s 00:04:43.763 user 0m0.019s 00:04:43.763 sys 0m0.022s 00:04:43.763 20:31:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.763 20:31:22 -- common/autotest_common.sh@10 -- # set +x 00:04:43.763 20:31:22 -- unit/unittest.sh@211 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:04:43.763 20:31:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:43.763 20:31:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:43.763 20:31:22 -- common/autotest_common.sh@10 -- # set +x 00:04:43.763 ************************************ 00:04:43.763 START TEST unittest_include 00:04:43.763 ************************************ 00:04:43.763 20:31:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:04:43.763 00:04:43.763 00:04:43.763 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.763 http://cunit.sourceforge.net/ 00:04:43.763 00:04:43.763 00:04:43.763 Suite: histogram 00:04:43.763 Test: histogram_test ...passed 00:04:43.763 Test: histogram_merge ...passed 00:04:43.763 00:04:43.763 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.763 suites 1 1 n/a 0 0 00:04:43.763 tests 2 2 2 0 0 00:04:43.763 asserts 50 50 50 0 n/a 00:04:43.763 00:04:43.763 Elapsed time = 0.000 seconds 00:04:43.763 ************************************ 00:04:43.763 END TEST unittest_include 00:04:43.763 ************************************ 00:04:43.763 00:04:43.763 real 0m0.040s 00:04:43.763 user 0m0.020s 00:04:43.763 sys 0m0.021s 00:04:43.763 20:31:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.763 20:31:22 -- common/autotest_common.sh@10 -- # set +x 00:04:43.763 20:31:22 -- unit/unittest.sh@212 -- # run_test unittest_bdev unittest_bdev 00:04:43.763 20:31:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:43.763 20:31:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:43.763 20:31:22 -- common/autotest_common.sh@10 -- # set +x 00:04:43.763 ************************************ 00:04:43.763 START TEST unittest_bdev 00:04:43.763 ************************************ 00:04:43.763 20:31:22 -- common/autotest_common.sh@1104 -- # unittest_bdev 00:04:43.763 20:31:22 -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:04:43.763 00:04:43.763 00:04:43.763 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.763 http://cunit.sourceforge.net/ 00:04:43.763 00:04:43.763 00:04:43.763 Suite: bdev 00:04:43.763 Test: bytes_to_blocks_test ...passed 00:04:43.763 Test: num_blocks_test ...passed 00:04:43.763 Test: io_valid_test ...passed 00:04:43.763 Test: open_write_test 
...[2024-04-15 20:31:22.365074] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:04:43.763 [2024-04-15 20:31:22.365258] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:04:43.763 [2024-04-15 20:31:22.365315] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:04:43.763 passed 00:04:43.763 Test: claim_test ...passed 00:04:43.763 Test: alias_add_del_test ...[2024-04-15 20:31:22.458812] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4548:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:04:43.763 [2024-04-15 20:31:22.458902] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4578:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:04:43.763 [2024-04-15 20:31:22.458923] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4548:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:04:43.763 passed 00:04:43.763 Test: get_device_stat_test ...passed 00:04:43.763 Test: bdev_io_types_test ...passed 00:04:43.763 Test: bdev_io_wait_test ...passed 00:04:43.763 Test: bdev_io_spans_split_test ...passed 00:04:43.763 Test: bdev_io_boundary_split_test ...passed 00:04:43.763 Test: bdev_io_max_size_and_segment_split_test ...[2024-04-15 20:31:22.637911] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3185:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:04:43.763 passed 00:04:43.763 Test: bdev_io_mix_split_test ...passed 00:04:43.763 Test: bdev_io_split_with_io_wait ...passed 00:04:43.763 Test: bdev_io_write_unit_split_test ...[2024-04-15 20:31:22.774966] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:04:43.763 [2024-04-15 20:31:22.775029] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:04:43.763 [2024-04-15 20:31:22.775046] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:04:43.763 [2024-04-15 20:31:22.775064] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:04:43.763 passed 00:04:43.763 Test: bdev_io_alignment_with_boundary ...passed 00:04:43.763 Test: bdev_io_alignment ...passed 00:04:43.763 Test: bdev_histograms ...passed 00:04:43.763 Test: bdev_write_zeroes ...passed 00:04:43.763 Test: bdev_compare_and_write ...passed 00:04:43.763 Test: bdev_compare ...passed 00:04:43.763 Test: bdev_compare_emulated ...passed 00:04:43.763 Test: bdev_zcopy_write ...passed 00:04:43.763 Test: bdev_zcopy_read ...passed 00:04:43.763 Test: bdev_open_while_hotremove ...passed 00:04:43.763 Test: bdev_close_while_hotremove ...passed 00:04:43.764 Test: bdev_open_ext_test ...passed 00:04:43.764 Test: bdev_open_ext_unregister ...passed 00:04:43.764 Test: bdev_set_io_timeout ...[2024-04-15 20:31:23.294872] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8041:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:04:43.764 [2024-04-15 20:31:23.294986] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8041:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:04:43.764 passed 00:04:43.764 Test: bdev_set_qd_sampling ...passed 00:04:43.764 Test: lba_range_overlap ...passed 00:04:43.764 Test: 
lock_lba_range_check_ranges ...passed 00:04:43.764 Test: lock_lba_range_with_io_outstanding ...passed 00:04:43.764 Test: lock_lba_range_overlapped ...passed 00:04:43.764 Test: bdev_quiesce ...[2024-04-15 20:31:23.544586] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9964:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 00:04:43.764 passed 00:04:43.764 Test: bdev_io_abort ...passed 00:04:43.764 Test: bdev_unmap ...passed 00:04:43.764 Test: bdev_write_zeroes_split_test ...passed 00:04:43.764 Test: bdev_set_options_test ...passed 00:04:43.764 Test: bdev_get_memory_domains ...passed 00:04:43.764 Test: bdev_io_ext ...[2024-04-15 20:31:23.710046] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 485:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:04:43.764 passed 00:04:43.764 Test: bdev_io_ext_no_opts ...passed 00:04:43.764 Test: bdev_io_ext_invalid_opts ...passed 00:04:43.764 Test: bdev_io_ext_split ...passed 00:04:43.764 Test: bdev_io_ext_bounce_buffer ...passed 00:04:43.764 Test: bdev_register_uuid_alias ...[2024-04-15 20:31:24.001681] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4548:bdev_name_add: *ERROR*: Bdev name 82875a73-39d0-46ee-8ba7-25858a3073e3 already exists 00:04:43.764 [2024-04-15 20:31:24.001739] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7598:bdev_register: *ERROR*: Unable to add uuid:82875a73-39d0-46ee-8ba7-25858a3073e3 alias for bdev bdev0 00:04:43.764 passed 00:04:43.764 Test: bdev_unregister_by_name ...passed 00:04:43.764 Test: for_each_bdev_test ...passed 00:04:43.764 Test: bdev_seek_test ...[2024-04-15 20:31:24.037081] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7831:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:04:43.764 [2024-04-15 20:31:24.037141] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7839:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 
00:04:43.764 passed 00:04:43.764 Test: bdev_copy ...passed 00:04:43.764 Test: bdev_copy_split_test ...passed 00:04:43.764 Test: examine_locks ...passed 00:04:43.764 Test: claim_v2_rwo ...passed 00:04:43.764 Test: claim_v2_rom ...[2024-04-15 20:31:24.172430] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:04:43.764 [2024-04-15 20:31:24.172484] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8565:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:04:43.764 [2024-04-15 20:31:24.172499] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:04:43.764 [2024-04-15 20:31:24.172549] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:04:43.764 [2024-04-15 20:31:24.172563] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:04:43.764 [2024-04-15 20:31:24.172597] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8560:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:04:43.764 passed 00:04:43.764 Test: claim_v2_rwm ...[2024-04-15 20:31:24.173399] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:04:43.764 [2024-04-15 20:31:24.173588] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:04:43.764 [2024-04-15 20:31:24.173689] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:04:43.764 [2024-04-15 20:31:24.173758] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:04:43.764 [2024-04-15 20:31:24.173837] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:04:43.764 [2024-04-15 20:31:24.173951] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8598:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:04:43.764 [2024-04-15 20:31:24.174210] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8633:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:04:43.764 [2024-04-15 20:31:24.174336] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:04:43.764 [2024-04-15 20:31:24.174398] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:04:43.764 [2024-04-15 20:31:24.174466] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:04:43.764 [2024-04-15 20:31:24.174518] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many 
by module bdev_ut 00:04:43.764 [2024-04-15 20:31:24.174590] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8653:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:04:43.764 [2024-04-15 20:31:24.174700] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8633:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:04:43.764 passed 00:04:43.764 Test: claim_v2_existing_writer ...passed 00:04:43.764 Test: claim_v2_existing_v1 ...passed 00:04:43.764 Test: claim_v1_existing_v2 ...passed 00:04:43.764 Test: examine_claimed ...[2024-04-15 20:31:24.174969] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8598:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:04:43.764 [2024-04-15 20:31:24.175586] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8598:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:04:43.764 [2024-04-15 20:31:24.175849] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:04:43.764 [2024-04-15 20:31:24.175929] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:04:43.764 [2024-04-15 20:31:24.175982] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:04:43.764 [2024-04-15 20:31:24.176203] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:04:43.764 [2024-04-15 20:31:24.176341] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:04:43.764 [2024-04-15 20:31:24.176422] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:04:43.764 [2024-04-15 20:31:24.176978] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:04:43.764 passed 00:04:43.764 00:04:43.764 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.764 suites 1 1 n/a 0 0 00:04:43.764 tests 59 59 59 0 0 00:04:43.764 asserts 4599 4599 4599 0 n/a 00:04:43.764 00:04:43.764 Elapsed time = 1.870 seconds 00:04:43.764 20:31:24 -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:04:43.764 00:04:43.764 00:04:43.764 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.764 http://cunit.sourceforge.net/ 00:04:43.764 00:04:43.764 00:04:43.764 Suite: nvme 00:04:43.764 Test: test_create_ctrlr ...passed 00:04:43.764 Test: test_reset_ctrlr ...passed 00:04:43.764 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:04:43.764 Test: test_failover_ctrlr ...passed 00:04:43.764 Test: test_race_between_failover_and_add_secondary_trid ...passed 00:04:43.764 Test: test_pending_reset ...passed 00:04:43.764 Test: test_attach_ctrlr ...[2024-04-15 20:31:24.237094] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:04:43.764 [2024-04-15 20:31:24.238363] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:04:43.764 [2024-04-15 20:31:24.238471] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:04:43.764 [2024-04-15 20:31:24.238557] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:04:43.764 [2024-04-15 20:31:24.239221] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:04:43.764 [2024-04-15 20:31:24.239310] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:04:43.764 [2024-04-15 20:31:24.239801] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4183:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:04:43.764 passed 00:04:43.764 Test: test_aer_cb ...passed 00:04:43.764 Test: test_submit_nvme_cmd ...passed 00:04:43.764 Test: test_add_remove_trid ...passed 00:04:43.764 Test: test_abort ...[2024-04-15 20:31:24.241321] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7168:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:04:43.764 passed 00:04:43.764 Test: test_get_io_qpair ...passed 00:04:43.764 Test: test_bdev_unregister ...passed 00:04:43.764 Test: test_compare_ns ...passed 00:04:43.764 Test: test_init_ana_log_page ...passed 00:04:43.764 Test: test_get_memory_domains ...passed 00:04:43.764 Test: test_reconnect_qpair ...passed 00:04:43.764 Test: test_create_bdev_ctrlr ...[2024-04-15 20:31:24.242474] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:04:43.764 [2024-04-15 20:31:24.242755] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5220:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:04:43.764 passed 00:04:43.764 Test: test_add_multi_ns_to_bdev ...[2024-04-15 20:31:24.243425] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4439:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:04:43.765 passed 00:04:43.765 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:04:43.765 Test: test_admin_path ...passed 00:04:43.765 Test: test_reset_bdev_ctrlr ...passed 00:04:43.765 Test: test_find_io_path ...passed 00:04:43.765 Test: test_retry_io_if_ana_state_is_updating ...passed 00:04:43.765 Test: test_retry_io_for_io_path_error ...passed 00:04:43.765 Test: test_retry_io_count ...passed 00:04:43.765 Test: test_concurrent_read_ana_log_page ...passed 00:04:43.765 Test: test_retry_io_for_ana_error ...passed 00:04:43.765 Test: test_check_io_error_resiliency_params ...passed 00:04:43.765 Test: test_retry_io_if_ctrlr_is_resetting ...passed 00:04:43.765 Test: test_reconnect_ctrlr ...passed 00:04:43.765 Test: test_retry_failover_ctrlr ...passed 00:04:43.765 Test: test_fail_path ...passed 00:04:43.765 Test: test_nvme_ns_cmp ...passed 00:04:43.765 Test: test_ana_transition ...passed 00:04:43.765 Test: test_set_preferred_path ...[2024-04-15 20:31:24.246575] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5873:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 
00:04:43.765 [2024-04-15 20:31:24.246637] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5877:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:04:43.765 [2024-04-15 20:31:24.246675] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5886:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:04:43.765 [2024-04-15 20:31:24.246717] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5889:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:04:43.765 [2024-04-15 20:31:24.246740] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5901:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:04:43.765 [2024-04-15 20:31:24.246767] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5901:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:04:43.765 [2024-04-15 20:31:24.246788] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5881:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:04:43.765 [2024-04-15 20:31:24.246826] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5896:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:04:43.765 [2024-04-15 20:31:24.246858] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5893:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:04:43.765 [2024-04-15 20:31:24.247231] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:04:43.765 [2024-04-15 20:31:24.247323] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:04:43.765 [2024-04-15 20:31:24.247467] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:04:43.765 [2024-04-15 20:31:24.247529] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:04:43.765 [2024-04-15 20:31:24.247593] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:04:43.765 [2024-04-15 20:31:24.247792] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:04:43.765 [2024-04-15 20:31:24.248081] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:04:43.765 [2024-04-15 20:31:24.248149] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:04:43.765 [2024-04-15 20:31:24.248211] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:04:43.765 [2024-04-15 20:31:24.248271] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:04:43.765 [2024-04-15 20:31:24.248322] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:04:43.765 passed 00:04:43.765 Test: test_find_next_io_path ...passed 00:04:43.765 Test: test_find_io_path_min_qd ...passed 00:04:43.765 Test: test_disable_auto_failback ...passed 00:04:43.765 Test: test_set_multipath_policy ...passed 00:04:43.765 Test: test_uuid_generation ...passed 00:04:43.765 Test: test_retry_io_to_same_path ...[2024-04-15 20:31:24.249168] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:04:43.765 passed 00:04:43.765 Test: test_race_between_reset_and_disconnected ...passed 00:04:43.765 Test: test_ctrlr_op_rpc ...passed 00:04:43.765 Test: test_bdev_ctrlr_op_rpc ...passed 00:04:43.765 Test: test_disable_enable_ctrlr ...passed 00:04:43.765 Test: test_delete_ctrlr_done ...passed 00:04:43.765 00:04:43.765 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.765 suites 1 1 n/a 0 0 00:04:43.765 tests 47 47 47 0 0 00:04:43.765 asserts 3527 3527 3527 0 n/a 00:04:43.765 00:04:43.765 Elapsed time = 0.010 seconds 00:04:43.765 [2024-04-15 20:31:24.250544] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:04:43.765 [2024-04-15 20:31:24.250602] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:04:43.765 20:31:24 -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:04:43.765 Test Options 00:04:43.765 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:04:43.765 00:04:43.765 00:04:43.765 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.765 http://cunit.sourceforge.net/ 00:04:43.765 00:04:43.765 00:04:43.765 Suite: raid 00:04:43.765 Test: test_create_raid ...passed 00:04:43.765 Test: test_create_raid_superblock ...passed 00:04:43.765 Test: test_delete_raid ...passed 00:04:43.765 Test: test_create_raid_invalid_args ...passed 00:04:43.765 Test: test_delete_raid_invalid_args ...passed 00:04:43.765 Test: test_io_channel ...passed 00:04:43.765 Test: test_reset_io ...[2024-04-15 20:31:24.283862] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1357:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:04:43.765 [2024-04-15 20:31:24.284117] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1351:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:04:43.765 [2024-04-15 20:31:24.284340] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1341:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:04:43.765 [2024-04-15 20:31:24.284471] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:04:43.765 [2024-04-15 20:31:24.285106] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:04:43.765 passed 00:04:43.765 Test: test_write_io ...passed 00:04:43.765 Test: test_read_io ...passed 00:04:43.765 Test: test_unmap_io ...passed 
00:04:43.765 Test: test_io_failure ...passed 00:04:43.765 Test: test_multi_raid_no_io ...[2024-04-15 20:31:25.245423] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 832:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:04:43.765 passed 00:04:43.765 Test: test_multi_raid_with_io ...passed 00:04:43.765 Test: test_io_type_supported ...passed 00:04:43.765 Test: test_raid_json_dump_info ...passed 00:04:43.765 Test: test_context_size ...passed 00:04:43.765 Test: test_raid_level_conversions ...passed 00:04:43.765 Test: test_raid_process ...passed 00:04:43.765 Test: test_raid_io_split ...passed 00:04:43.765 00:04:43.765 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.765 suites 1 1 n/a 0 0 00:04:43.765 tests 19 19 19 0 0 00:04:43.765 asserts 177879 177879 177879 0 n/a 00:04:43.765 00:04:43.765 Elapsed time = 0.980 seconds 00:04:43.765 20:31:25 -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:04:43.765 00:04:43.765 00:04:43.765 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.765 http://cunit.sourceforge.net/ 00:04:43.765 00:04:43.765 00:04:43.765 Suite: raid_sb 00:04:43.765 Test: test_raid_bdev_write_superblock ...passed 00:04:43.765 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:04:43.765 Test: test_raid_bdev_parse_superblock ...passed 00:04:43.765 00:04:43.765 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.765 suites 1 1 n/a 0 0 00:04:43.765 tests 3 3 3 0 0 00:04:43.765 asserts 32 32 32 0 n/a 00:04:43.765 00:04:43.765 Elapsed time = 0.000 seconds 00:04:43.765 [2024-04-15 20:31:25.290162] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 120:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:04:43.765 20:31:25 -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:04:43.765 00:04:43.765 00:04:43.765 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.765 http://cunit.sourceforge.net/ 00:04:43.765 00:04:43.765 00:04:43.765 Suite: concat 00:04:43.765 Test: test_concat_start ...passed 00:04:43.765 Test: test_concat_rw ...passed 00:04:43.765 Test: test_concat_null_payload ...passed 00:04:43.765 00:04:43.765 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.765 suites 1 1 n/a 0 0 00:04:43.765 tests 3 3 3 0 0 00:04:43.765 asserts 8097 8097 8097 0 n/a 00:04:43.765 00:04:43.765 Elapsed time = 0.010 seconds 00:04:43.766 20:31:25 -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:04:43.766 00:04:43.766 00:04:43.766 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.766 http://cunit.sourceforge.net/ 00:04:43.766 00:04:43.766 00:04:43.766 Suite: raid1 00:04:43.766 Test: test_raid1_start ...passed 00:04:43.766 Test: test_raid1_read_balancing ...passed 00:04:43.766 00:04:43.766 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.766 suites 1 1 n/a 0 0 00:04:43.766 tests 2 2 2 0 0 00:04:43.766 asserts 2856 2856 2856 0 n/a 00:04:43.766 00:04:43.766 Elapsed time = 0.000 seconds 00:04:43.766 20:31:25 -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:04:43.766 00:04:43.766 00:04:43.766 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.766 http://cunit.sourceforge.net/ 00:04:43.766 00:04:43.766 00:04:43.766 Suite: zone 00:04:43.766 Test: test_zone_get_operation ...passed 00:04:43.766 Test: 
test_bdev_zone_get_info ...passed 00:04:43.766 Test: test_bdev_zone_management ...passed 00:04:43.766 Test: test_bdev_zone_append ...passed 00:04:43.766 Test: test_bdev_zone_append_with_md ...passed 00:04:43.766 Test: test_bdev_zone_appendv ...passed 00:04:43.766 Test: test_bdev_zone_appendv_with_md ...passed 00:04:43.766 Test: test_bdev_io_get_append_location ...passed 00:04:43.766 00:04:43.766 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.766 suites 1 1 n/a 0 0 00:04:43.766 tests 8 8 8 0 0 00:04:43.766 asserts 94 94 94 0 n/a 00:04:43.766 00:04:43.766 Elapsed time = 0.000 seconds 00:04:43.766 20:31:25 -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:04:43.766 00:04:43.766 00:04:43.766 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.766 http://cunit.sourceforge.net/ 00:04:43.766 00:04:43.766 00:04:43.766 Suite: gpt_parse 00:04:43.766 Test: test_parse_mbr_and_primary ...[2024-04-15 20:31:25.397559] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:04:43.766 [2024-04-15 20:31:25.397896] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:04:43.766 [2024-04-15 20:31:25.397948] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:04:43.766 passed 00:04:43.766 Test: test_parse_secondary ...passed 00:04:43.766 Test: test_check_mbr ...passed 00:04:43.766 Test: test_read_header ...passed 00:04:43.766 Test: test_read_partitions ...passed 00:04:43.766 00:04:43.766 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.766 suites 1 1 n/a 0 0 00:04:43.766 tests 5 5 5 0 0 00:04:43.766 asserts 33 33 33 0 n/a 00:04:43.766 00:04:43.766 Elapsed time = 0.000 seconds 00:04:43.766 [2024-04-15 20:31:25.398040] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:04:43.766 [2024-04-15 20:31:25.398085] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:04:43.766 [2024-04-15 20:31:25.398165] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:04:43.766 [2024-04-15 20:31:25.398455] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:04:43.766 [2024-04-15 20:31:25.398502] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:04:43.766 [2024-04-15 20:31:25.398534] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:04:43.766 [2024-04-15 20:31:25.398567] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:04:43.766 [2024-04-15 20:31:25.398868] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:04:43.766 [2024-04-15 20:31:25.398903] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:04:43.766 [2024-04-15 20:31:25.398941] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:04:43.766 [2024-04-15 20:31:25.399043] 
/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:04:43.766 [2024-04-15 20:31:25.399129] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:04:43.766 [2024-04-15 20:31:25.399167] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:04:43.766 [2024-04-15 20:31:25.399196] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:04:43.766 [2024-04-15 20:31:25.399230] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:04:43.766 [2024-04-15 20:31:25.399269] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:04:43.766 [2024-04-15 20:31:25.399323] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:04:43.766 [2024-04-15 20:31:25.399358] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:04:43.766 [2024-04-15 20:31:25.399383] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:04:43.766 [2024-04-15 20:31:25.399540] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:04:43.766 20:31:25 -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:04:43.766 00:04:43.766 00:04:43.766 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.766 http://cunit.sourceforge.net/ 00:04:43.766 00:04:43.766 00:04:43.766 Suite: bdev_part 00:04:43.766 Test: part_test ...passed 00:04:43.766 Test: part_free_test ...[2024-04-15 20:31:25.423341] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4548:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:04:43.766 passed 00:04:43.766 Test: part_get_io_channel_test ...passed 00:04:43.766 Test: part_construct_ext ...passed 00:04:43.766 00:04:43.766 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.766 suites 1 1 n/a 0 0 00:04:43.766 tests 4 4 4 0 0 00:04:43.766 asserts 48 48 48 0 n/a 00:04:43.766 00:04:43.766 Elapsed time = 0.040 seconds 00:04:43.766 20:31:25 -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:04:43.766 00:04:43.766 00:04:43.766 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.766 http://cunit.sourceforge.net/ 00:04:43.766 00:04:43.766 00:04:43.766 Suite: scsi_nvme_suite 00:04:43.766 Test: scsi_nvme_translate_test ...passed 00:04:43.766 00:04:43.766 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.766 suites 1 1 n/a 0 0 00:04:43.766 tests 1 1 1 0 0 00:04:43.766 asserts 104 104 104 0 n/a 00:04:43.766 00:04:43.766 Elapsed time = 0.000 seconds 00:04:43.766 20:31:25 -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:04:43.766 00:04:43.766 00:04:43.766 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.766 http://cunit.sourceforge.net/ 00:04:43.766 00:04:43.766 00:04:43.766 Suite: lvol 00:04:43.766 Test: ut_lvs_init ...passed 00:04:43.766 Test: ut_lvol_init ...passed 00:04:43.766 Test: ut_lvol_snapshot ...passed 
00:04:43.766 Test: ut_lvol_clone ...passed 00:04:43.766 Test: ut_lvs_destroy ...passed 00:04:43.766 Test: ut_lvs_unload ...passed 00:04:43.766 Test: ut_lvol_resize ...[2024-04-15 20:31:25.499029] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:04:43.766 [2024-04-15 20:31:25.499301] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:04:43.766 [2024-04-15 20:31:25.499787] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1391:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:04:43.766 passed 00:04:43.766 Test: ut_lvol_set_read_only ...passed 00:04:43.766 Test: ut_lvol_hotremove ...passed 00:04:43.766 Test: ut_vbdev_lvol_get_io_channel ...passed 00:04:43.766 Test: ut_vbdev_lvol_io_type_supported ...passed 00:04:43.766 Test: ut_lvol_read_write ...passed 00:04:43.766 Test: ut_vbdev_lvol_submit_request ...passed 00:04:43.766 Test: ut_lvol_examine_config ...passed 00:04:43.767 Test: ut_lvol_examine_disk ...[2024-04-15 20:31:25.500059] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1533:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:04:43.767 passed 00:04:43.767 Test: ut_lvol_rename ...passed 00:04:43.767 Test: ut_bdev_finish ...passed 00:04:43.767 Test: ut_lvs_rename ...passed 00:04:43.767 Test: ut_lvol_seek ...passed 00:04:43.767 Test: ut_esnap_dev_create ...passed 00:04:43.767 Test: ut_lvol_esnap_clone_bad_args ...passed 00:04:43.767 00:04:43.767 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.767 suites 1 1 n/a 0 0 00:04:43.767 tests 21 21 21 0 0 00:04:43.767 asserts 712 712 712 0 n/a 00:04:43.767 00:04:43.767 Elapsed time = 0.000 seconds 00:04:43.767 [2024-04-15 20:31:25.500434] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:04:43.767 [2024-04-15 20:31:25.500499] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1341:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:04:43.767 [2024-04-15 20:31:25.500753] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1868:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:04:43.767 [2024-04-15 20:31:25.500792] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1874:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:04:43.767 [2024-04-15 20:31:25.500814] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:04:43.767 [2024-04-15 20:31:25.500850] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1900:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:04:43.767 [2024-04-15 20:31:25.500938] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1277:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:04:43.767 [2024-04-15 20:31:25.500965] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1284:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:04:43.767 20:31:25 -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:04:43.767 00:04:43.767 00:04:43.767 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.767 http://cunit.sourceforge.net/ 
00:04:43.767 00:04:43.767 00:04:43.767 Suite: zone_block 00:04:43.767 Test: test_zone_block_create ...passed 00:04:43.767 Test: test_zone_block_create_invalid ...passed 00:04:43.767 Test: test_get_zone_info ...passed 00:04:43.767 Test: test_supported_io_types ...[2024-04-15 20:31:25.545199] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:04:43.767 [2024-04-15 20:31:25.545381] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-04-15 20:31:25.545451] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:04:43.767 [2024-04-15 20:31:25.545483] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-04-15 20:31:25.545518] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:04:43.767 [2024-04-15 20:31:25.545544] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-04-15 20:31:25.545563] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:04:43.767 [2024-04-15 20:31:25.545596] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-04-15 20:31:25.545806] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:04:43.767 [2024-04-15 20:31:25.545838] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:04:43.767 [2024-04-15 20:31:25.545862] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:04:43.767 passed 00:04:43.767 Test: test_reset_zone ...passed 00:04:43.767 Test: test_open_zone ...[2024-04-15 20:31:25.546090] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:04:43.767 [2024-04-15 20:31:25.546112] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:04:43.767 [2024-04-15 20:31:25.546253] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:04:43.767 [2024-04-15 20:31:25.546549] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:04:43.767 [2024-04-15 20:31:25.546580] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:04:43.767 passed 00:04:43.767 Test: test_zone_write ...[2024-04-15 20:31:25.546763] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:04:43.767 [2024-04-15 20:31:25.546787] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:04:43.767 [2024-04-15 20:31:25.546818] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:04:43.767 [2024-04-15 20:31:25.546847] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:04:43.767 [2024-04-15 20:31:25.550831] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:04:43.767 [2024-04-15 20:31:25.550866] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:04:43.767 [2024-04-15 20:31:25.550906] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:04:43.767 [2024-04-15 20:31:25.550926] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:04:43.767 [2024-04-15 20:31:25.555142] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:04:43.767 [2024-04-15 20:31:25.555186] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:04:43.767 passed 00:04:43.767 Test: test_zone_read ...passed 00:04:43.767 Test: test_close_zone ...[2024-04-15 20:31:25.555386] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:04:43.767 [2024-04-15 20:31:25.555407] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:04:43.767 [2024-04-15 20:31:25.555441] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:04:43.767 [2024-04-15 20:31:25.555460] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:04:43.767 [2024-04-15 20:31:25.555638] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:04:43.767 [2024-04-15 20:31:25.555666] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:04:43.767 passed 00:04:43.767 Test: test_finish_zone ...[2024-04-15 20:31:25.555793] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:04:43.767 [2024-04-15 20:31:25.555822] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:04:43.767 [2024-04-15 20:31:25.555890] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:04:43.767 [2024-04-15 20:31:25.555910] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:04:43.767 [2024-04-15 20:31:25.556079] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:04:43.767 passed 00:04:43.767 Test: test_append_zone ...[2024-04-15 20:31:25.556099] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:04:43.767 [2024-04-15 20:31:25.556252] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:04:43.767 [2024-04-15 20:31:25.556283] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:04:43.767 [2024-04-15 20:31:25.556320] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:04:43.767 [2024-04-15 20:31:25.556341] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:04:43.767 passed 00:04:43.767 00:04:43.767 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.767 suites 1 1 n/a 0 0 00:04:43.767 tests 11 11 11 0 0 00:04:43.767 asserts 3437 3437 3437 0 n/a 00:04:43.767 00:04:43.767 Elapsed time = 0.020 seconds 00:04:43.767 [2024-04-15 20:31:25.564786] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:04:43.767 [2024-04-15 20:31:25.564819] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:04:43.767 20:31:25 -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:04:43.767 00:04:43.767 00:04:43.767 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.767 http://cunit.sourceforge.net/ 00:04:43.767 00:04:43.767 00:04:43.767 Suite: bdev 00:04:43.768 Test: basic ...[2024-04-15 20:31:25.667894] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x51d421): Operation not permitted (rc=-1) 00:04:43.768 [2024-04-15 20:31:25.668057] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x51d3e0): Operation not permitted (rc=-1) 00:04:43.768 [2024-04-15 20:31:25.668080] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x51d421): Operation not permitted (rc=-1) 00:04:43.768 passed 00:04:43.768 Test: unregister_and_close ...passed 00:04:43.768 Test: unregister_and_close_different_threads ...passed 00:04:43.768 Test: basic_qos ...passed 00:04:43.768 Test: put_channel_during_reset ...passed 00:04:43.768 Test: aborted_reset ...passed 00:04:43.768 Test: aborted_reset_no_outstanding_io ...passed 00:04:43.768 Test: io_during_reset ...passed 00:04:43.768 Test: reset_completions ...passed 00:04:43.768 Test: io_during_qos_queue ...passed 00:04:43.768 Test: io_during_qos_reset ...passed 00:04:43.768 Test: enomem ...passed 00:04:43.768 Test: enomem_multi_bdev ...passed 00:04:43.768 Test: enomem_multi_bdev_unregister ...passed 00:04:43.768 Test: enomem_multi_io_target ...passed 00:04:43.768 Test: qos_dynamic_enable ...passed 00:04:43.768 Test: bdev_histograms_mt ...passed 00:04:43.768 Test: bdev_set_io_timeout_mt ...passed 00:04:43.768 Test: lock_lba_range_then_submit_io ...[2024-04-15 20:31:26.508243] thread.c: 465:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered 00:04:43.768 [2024-04-15 20:31:26.528902] thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x51d3a0 already registered (old:0x6130000003c0 new:0x613000000c80) 00:04:43.768 passed 00:04:43.768 Test: unregister_during_reset ...passed 00:04:43.768 Test: event_notify_and_close ...passed 00:04:43.768 Suite: bdev_wrong_thread 00:04:43.768 Test: spdk_bdev_register_wt ...passed 00:04:43.768 Test: spdk_bdev_examine_wt ...passed 00:04:43.768 00:04:43.768 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.768 suites 2 2 n/a 0 0 00:04:43.768 tests 23 23 23 0 0 00:04:43.768 asserts 601 601 601 0 n/a 00:04:43.768 00:04:43.768 Elapsed time = 0.990 seconds 00:04:43.768 [2024-04-15 20:31:26.638745] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8359:spdk_bdev_register: *ERROR*: Cannot examine bdev wt_bdev on thread 0x618000000880 (0x618000000880) 00:04:43.768 [2024-04-15 20:31:26.638944] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 793:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x618000000880 (0x618000000880) 00:04:43.768 ************************************ 00:04:43.768 END TEST unittest_bdev 00:04:43.768 ************************************ 00:04:43.768 00:04:43.768 real 0m4.384s 00:04:43.768 user 0m1.609s 00:04:43.768 sys 0m2.769s 00:04:43.768 20:31:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.768 20:31:26 -- common/autotest_common.sh@10 -- # set +x 00:04:43.768 20:31:26 -- unit/unittest.sh@213 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:04:43.768 20:31:26 -- unit/unittest.sh@218 -- # grep -q '#define 
SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:04:43.768 20:31:26 -- unit/unittest.sh@223 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:04:43.768 20:31:26 -- unit/unittest.sh@227 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:04:43.768 20:31:26 -- unit/unittest.sh@231 -- # run_test unittest_blob_blobfs unittest_blob 00:04:43.768 20:31:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:43.768 20:31:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:43.768 20:31:26 -- common/autotest_common.sh@10 -- # set +x 00:04:43.768 ************************************ 00:04:43.768 START TEST unittest_blob_blobfs 00:04:43.768 ************************************ 00:04:43.768 20:31:26 -- common/autotest_common.sh@1104 -- # unittest_blob 00:04:43.768 20:31:26 -- unit/unittest.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:04:43.768 20:31:26 -- unit/unittest.sh@39 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:04:43.768 00:04:43.768 00:04:43.768 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.768 http://cunit.sourceforge.net/ 00:04:43.768 00:04:43.768 00:04:43.768 Suite: blob_nocopy_noextent 00:04:43.768 Test: blob_init ...[2024-04-15 20:31:26.764264] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:04:43.768 passed 00:04:43.768 Test: blob_thin_provision ...passed 00:04:43.768 Test: blob_read_only ...passed 00:04:43.768 Test: bs_load ...[2024-04-15 20:31:26.806202] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:04:43.768 passed 00:04:43.768 Test: bs_load_custom_cluster_size ...passed 00:04:43.768 Test: bs_load_after_failed_grow ...passed 00:04:43.768 Test: bs_cluster_sz ...[2024-04-15 20:31:26.821772] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:04:43.768 [2024-04-15 20:31:26.822014] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:04:43.768 [2024-04-15 20:31:26.822098] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:04:43.768 passed 00:04:43.768 Test: bs_resize_md ...passed 00:04:43.768 Test: bs_destroy ...passed 00:04:43.768 Test: bs_type ...passed 00:04:43.768 Test: bs_super_block ...passed 00:04:43.768 Test: bs_test_recover_cluster_count ...passed 00:04:43.768 Test: bs_grow_live ...passed 00:04:43.768 Test: bs_grow_live_no_space ...passed 00:04:43.768 Test: bs_test_grow ...passed 00:04:43.768 Test: blob_serialize_test ...passed 00:04:43.768 Test: super_block_crc ...passed 00:04:43.768 Test: blob_thin_prov_write_count_io ...passed 00:04:43.768 Test: bs_load_iter_test ...passed 00:04:43.768 Test: blob_relations ...[2024-04-15 20:31:26.915525] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:04:43.768 [2024-04-15 20:31:26.915615] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:43.768 [2024-04-15 20:31:26.916214] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:04:43.768 [2024-04-15 20:31:26.916265] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:43.768 passed 00:04:43.768 Test: blob_relations2 ...[2024-04-15 20:31:26.925015] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:04:43.768 [2024-04-15 20:31:26.925072] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:43.768 [2024-04-15 20:31:26.925099] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:04:43.768 [2024-04-15 20:31:26.925116] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:43.768 [2024-04-15 20:31:26.925961] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:04:43.768 [2024-04-15 20:31:26.926000] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:43.768 passed 00:04:43.768 Test: blob_relations3 ...[2024-04-15 20:31:26.926257] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:04:43.768 [2024-04-15 20:31:26.926289] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:43.768 passed 00:04:43.768 Test: blobstore_clean_power_failure ...passed 00:04:43.768 Test: blob_delete_snapshot_power_failure ...[2024-04-15 20:31:27.018243] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:04:43.768 [2024-04-15 20:31:27.026534] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:04:43.768 [2024-04-15 20:31:27.026597] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:04:43.769 [2024-04-15 20:31:27.026634] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:43.769 [2024-04-15 20:31:27.035706] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:04:43.769 [2024-04-15 20:31:27.035757] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:04:43.769 [2024-04-15 20:31:27.035807] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:04:43.769 [2024-04-15 20:31:27.035829] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:43.769 [2024-04-15 20:31:27.044076] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:04:43.769 [2024-04-15 20:31:27.044178] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:43.769 [2024-04-15 20:31:27.058101] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:04:43.769 [2024-04-15 20:31:27.058323] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:43.769 [2024-04-15 20:31:27.071468] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:04:43.769 [2024-04-15 20:31:27.071583] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:43.769 passed 00:04:43.769 Test: blob_create_snapshot_power_failure ...[2024-04-15 20:31:27.105375] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:04:43.769 [2024-04-15 20:31:27.127535] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:04:43.769 [2024-04-15 20:31:27.135462] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:04:43.769 passed 00:04:43.769 Test: blob_io_unit ...passed 00:04:43.769 Test: blob_io_unit_compatibility ...passed 00:04:43.769 Test: blob_ext_md_pages ...passed 00:04:43.769 Test: blob_esnap_io_4096_4096 ...passed 00:04:43.769 Test: blob_esnap_io_512_512 ...passed 00:04:43.769 Test: blob_esnap_io_4096_512 ...passed 00:04:43.769 Test: blob_esnap_io_512_4096 ...passed 00:04:43.769 Suite: blob_bs_nocopy_noextent 00:04:44.027 Test: blob_open ...passed 00:04:44.027 Test: blob_create ...[2024-04-15 20:31:27.285901] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:04:44.027 passed 00:04:44.027 Test: blob_create_loop ...passed 00:04:44.027 Test: blob_create_fail ...[2024-04-15 20:31:27.345054] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:04:44.027 passed 00:04:44.027 Test: blob_create_internal ...passed 00:04:44.027 Test: blob_create_zero_extent ...passed 00:04:44.027 Test: blob_snapshot ...passed 00:04:44.027 Test: blob_clone ...passed 00:04:44.027 Test: blob_inflate ...[2024-04-15 20:31:27.492196] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:04:44.027 passed 00:04:44.027 Test: blob_delete ...passed 00:04:44.285 Test: blob_resize_test ...[2024-04-15 20:31:27.530897] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:04:44.286 passed 00:04:44.286 Test: channel_ops ...passed 00:04:44.286 Test: blob_super ...passed 00:04:44.286 Test: blob_rw_verify_iov ...passed 00:04:44.286 Test: blob_unmap ...passed 00:04:44.286 Test: blob_iter ...passed 00:04:44.286 Test: blob_parse_md ...passed 00:04:44.286 Test: bs_load_pending_removal ...passed 00:04:44.286 Test: bs_unload ...[2024-04-15 20:31:27.728105] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:04:44.286 passed 00:04:44.286 Test: bs_usable_clusters ...passed 00:04:44.286 Test: blob_crc ...[2024-04-15 20:31:27.772524] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:04:44.286 [2024-04-15 20:31:27.772970] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:04:44.286 passed 00:04:44.544 Test: blob_flags ...passed 00:04:44.544 Test: bs_version ...passed 00:04:44.544 Test: blob_set_xattrs_test ...[2024-04-15 20:31:27.836567] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:04:44.544 [2024-04-15 20:31:27.837104] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:04:44.544 passed 00:04:44.544 Test: blob_thin_prov_alloc ...passed 00:04:44.544 Test: blob_insert_cluster_msg_test ...passed 00:04:44.544 Test: blob_thin_prov_rw ...passed 00:04:44.544 Test: blob_thin_prov_rle ...passed 00:04:44.544 Test: blob_thin_prov_rw_iov ...passed 00:04:44.544 Test: blob_snapshot_rw ...passed 00:04:44.803 Test: blob_snapshot_rw_iov ...passed 00:04:44.803 Test: blob_inflate_rw ...passed 00:04:44.803 Test: blob_snapshot_freeze_io ...passed 00:04:45.062 Test: blob_operation_split_rw ...passed 00:04:45.062 Test: blob_operation_split_rw_iov ...passed 00:04:45.062 Test: blob_simultaneous_operations ...[2024-04-15 20:31:28.429217] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:04:45.062 [2024-04-15 20:31:28.429352] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:45.062 [2024-04-15 20:31:28.432354] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:04:45.062 [2024-04-15 20:31:28.432510] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:45.062 [2024-04-15 20:31:28.456280] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:04:45.062 [2024-04-15 20:31:28.456376] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:45.062 [2024-04-15 20:31:28.456490] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:04:45.062 [2024-04-15 20:31:28.456523] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:45.062 passed 00:04:45.062 Test: blob_persist_test ...passed 00:04:45.321 Test: blob_decouple_snapshot ...passed 00:04:45.321 Test: blob_seek_io_unit ...passed 00:04:45.321 Test: blob_nested_freezes ...passed 00:04:45.321 Suite: blob_blob_nocopy_noextent 00:04:45.321 Test: blob_write ...passed 00:04:45.321 Test: blob_read ...passed 00:04:45.321 Test: blob_rw_verify ...passed 00:04:45.321 Test: blob_rw_verify_iov_nomem ...passed 00:04:45.321 Test: blob_rw_iov_read_only ...passed 00:04:45.321 Test: blob_xattr ...passed 00:04:45.321 Test: blob_dirty_shutdown ...passed 00:04:45.579 Test: blob_is_degraded ...passed 00:04:45.579 Suite: blob_esnap_bs_nocopy_noextent 00:04:45.579 Test: blob_esnap_create ...passed 00:04:45.579 Test: blob_esnap_thread_add_remove ...passed 00:04:45.579 Test: blob_esnap_clone_snapshot ...passed 00:04:45.579 Test: blob_esnap_clone_inflate ...passed 00:04:45.579 Test: blob_esnap_clone_decouple ...passed 00:04:45.579 Test: blob_esnap_clone_reload ...passed 00:04:45.579 Test: blob_esnap_hotplug ...passed 00:04:45.579 Suite: blob_nocopy_extent 00:04:45.579 Test: blob_init ...[2024-04-15 20:31:29.007528] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:04:45.579 passed 00:04:45.579 Test: blob_thin_provision ...passed 00:04:45.579 Test: blob_read_only ...passed 00:04:45.579 Test: bs_load ...[2024-04-15 20:31:29.042312] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:04:45.579 passed 00:04:45.579 Test: bs_load_custom_cluster_size ...passed 00:04:45.579 Test: bs_load_after_failed_grow ...passed 00:04:45.579 Test: bs_cluster_sz ...[2024-04-15 20:31:29.060677] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:04:45.579 [2024-04-15 20:31:29.060909] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:04:45.579 [2024-04-15 20:31:29.060964] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:04:45.579 passed 00:04:45.838 Test: bs_resize_md ...passed 00:04:45.838 Test: bs_destroy ...passed 00:04:45.838 Test: bs_type ...passed 00:04:45.838 Test: bs_super_block ...passed 00:04:45.838 Test: bs_test_recover_cluster_count ...passed 00:04:45.838 Test: bs_grow_live ...passed 00:04:45.838 Test: bs_grow_live_no_space ...passed 00:04:45.838 Test: bs_test_grow ...passed 00:04:45.839 Test: blob_serialize_test ...passed 00:04:45.839 Test: super_block_crc ...passed 00:04:45.839 Test: blob_thin_prov_write_count_io ...passed 00:04:45.839 Test: bs_load_iter_test ...passed 00:04:45.839 Test: blob_relations ...[2024-04-15 20:31:29.185865] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:04:45.839 [2024-04-15 20:31:29.185980] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:45.839 [2024-04-15 20:31:29.186832] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:04:45.839 [2024-04-15 20:31:29.186881] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:45.839 passed 00:04:45.839 Test: blob_relations2 ...[2024-04-15 20:31:29.197778] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:04:45.839 [2024-04-15 20:31:29.197838] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:45.839 [2024-04-15 20:31:29.197888] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:04:45.839 [2024-04-15 20:31:29.197925] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:45.839 [2024-04-15 20:31:29.199454] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:04:45.839 [2024-04-15 20:31:29.199512] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:45.839 [2024-04-15 20:31:29.200041] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:04:45.839 [2024-04-15 20:31:29.200095] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:45.839 passed 00:04:45.839 Test: blob_relations3 ...passed 00:04:45.839 Test: blobstore_clean_power_failure ...passed 00:04:45.839 Test: blob_delete_snapshot_power_failure ...[2024-04-15 20:31:29.309509] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:04:45.839 [2024-04-15 20:31:29.322490] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:04:45.839 [2024-04-15 20:31:29.333328] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:04:45.839 [2024-04-15 20:31:29.333418] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:04:45.839 [2024-04-15 20:31:29.333451] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:46.098 [2024-04-15 20:31:29.343018] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:04:46.098 [2024-04-15 20:31:29.343092] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:04:46.098 [2024-04-15 20:31:29.343130] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:04:46.098 [2024-04-15 20:31:29.343158] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:46.098 [2024-04-15 20:31:29.351978] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:04:46.098 [2024-04-15 20:31:29.352045] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:04:46.098 [2024-04-15 20:31:29.352081] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:04:46.098 [2024-04-15 20:31:29.352128] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:46.098 [2024-04-15 20:31:29.360978] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:04:46.098 [2024-04-15 20:31:29.361051] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:46.098 [2024-04-15 20:31:29.369710] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:04:46.098 [2024-04-15 20:31:29.369815] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:46.098 [2024-04-15 20:31:29.379455] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:04:46.098 [2024-04-15 20:31:29.379529] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:46.098 passed 00:04:46.098 Test: blob_create_snapshot_power_failure ...[2024-04-15 20:31:29.405144] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:04:46.098 [2024-04-15 20:31:29.413158] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:04:46.098 [2024-04-15 20:31:29.436511] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:04:46.098 [2024-04-15 20:31:29.446075] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:04:46.098 passed 00:04:46.098 Test: blob_io_unit ...passed 00:04:46.098 Test: blob_io_unit_compatibility ...passed 00:04:46.098 Test: blob_ext_md_pages ...passed 00:04:46.098 Test: blob_esnap_io_4096_4096 ...passed 00:04:46.098 Test: blob_esnap_io_512_512 ...passed 00:04:46.098 Test: blob_esnap_io_4096_512 ...passed 00:04:46.098 Test: 
blob_esnap_io_512_4096 ...passed 00:04:46.098 Suite: blob_bs_nocopy_extent 00:04:46.098 Test: blob_open ...passed 00:04:46.357 Test: blob_create ...[2024-04-15 20:31:29.602234] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:04:46.357 passed 00:04:46.357 Test: blob_create_loop ...passed 00:04:46.357 Test: blob_create_fail ...[2024-04-15 20:31:29.679619] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:04:46.357 passed 00:04:46.357 Test: blob_create_internal ...passed 00:04:46.357 Test: blob_create_zero_extent ...passed 00:04:46.357 Test: blob_snapshot ...passed 00:04:46.357 Test: blob_clone ...passed 00:04:46.357 Test: blob_inflate ...[2024-04-15 20:31:29.829513] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:04:46.357 passed 00:04:46.615 Test: blob_delete ...passed 00:04:46.615 Test: blob_resize_test ...[2024-04-15 20:31:29.876852] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:04:46.615 passed 00:04:46.615 Test: channel_ops ...passed 00:04:46.615 Test: blob_super ...passed 00:04:46.615 Test: blob_rw_verify_iov ...passed 00:04:46.615 Test: blob_unmap ...passed 00:04:46.615 Test: blob_iter ...passed 00:04:46.615 Test: blob_parse_md ...passed 00:04:46.615 Test: bs_load_pending_removal ...passed 00:04:46.615 Test: bs_unload ...[2024-04-15 20:31:30.062609] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:04:46.615 passed 00:04:46.615 Test: bs_usable_clusters ...passed 00:04:46.615 Test: blob_crc ...[2024-04-15 20:31:30.106848] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:04:46.615 [2024-04-15 20:31:30.107014] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:04:46.615 passed 00:04:46.873 Test: blob_flags ...passed 00:04:46.873 Test: bs_version ...passed 00:04:46.873 Test: blob_set_xattrs_test ...[2024-04-15 20:31:30.165880] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:04:46.873 [2024-04-15 20:31:30.165978] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:04:46.873 passed 00:04:46.873 Test: blob_thin_prov_alloc ...passed 00:04:46.873 Test: blob_insert_cluster_msg_test ...passed 00:04:46.873 Test: blob_thin_prov_rw ...passed 00:04:46.873 Test: blob_thin_prov_rle ...passed 00:04:46.873 Test: blob_thin_prov_rw_iov ...passed 00:04:46.873 Test: blob_snapshot_rw ...passed 00:04:46.873 Test: blob_snapshot_rw_iov ...passed 00:04:47.132 Test: blob_inflate_rw ...passed 00:04:47.132 Test: blob_snapshot_freeze_io ...passed 00:04:47.132 Test: blob_operation_split_rw ...passed 00:04:47.390 Test: blob_operation_split_rw_iov ...passed 00:04:47.391 Test: blob_simultaneous_operations ...[2024-04-15 20:31:30.729162] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:04:47.391 [2024-04-15 
20:31:30.729233] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:47.391 [2024-04-15 20:31:30.730387] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:04:47.391 [2024-04-15 20:31:30.730429] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:47.391 [2024-04-15 20:31:30.742056] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:04:47.391 [2024-04-15 20:31:30.742131] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:47.391 [2024-04-15 20:31:30.742209] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:04:47.391 [2024-04-15 20:31:30.742225] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:47.391 passed 00:04:47.391 Test: blob_persist_test ...passed 00:04:47.391 Test: blob_decouple_snapshot ...passed 00:04:47.391 Test: blob_seek_io_unit ...passed 00:04:47.391 Test: blob_nested_freezes ...passed 00:04:47.391 Suite: blob_blob_nocopy_extent 00:04:47.650 Test: blob_write ...passed 00:04:47.650 Test: blob_read ...passed 00:04:47.650 Test: blob_rw_verify ...passed 00:04:47.650 Test: blob_rw_verify_iov_nomem ...passed 00:04:47.650 Test: blob_rw_iov_read_only ...passed 00:04:47.650 Test: blob_xattr ...passed 00:04:47.650 Test: blob_dirty_shutdown ...passed 00:04:47.650 Test: blob_is_degraded ...passed 00:04:47.650 Suite: blob_esnap_bs_nocopy_extent 00:04:47.650 Test: blob_esnap_create ...passed 00:04:47.650 Test: blob_esnap_thread_add_remove ...passed 00:04:47.909 Test: blob_esnap_clone_snapshot ...passed 00:04:47.909 Test: blob_esnap_clone_inflate ...passed 00:04:47.909 Test: blob_esnap_clone_decouple ...passed 00:04:47.909 Test: blob_esnap_clone_reload ...passed 00:04:47.909 Test: blob_esnap_hotplug ...passed 00:04:47.909 Suite: blob_copy_noextent 00:04:47.909 Test: blob_init ...[2024-04-15 20:31:31.246817] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:04:47.909 passed 00:04:47.909 Test: blob_thin_provision ...passed 00:04:47.909 Test: blob_read_only ...passed 00:04:47.909 Test: bs_load ...[2024-04-15 20:31:31.281010] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:04:47.909 passed 00:04:47.909 Test: bs_load_custom_cluster_size ...passed 00:04:47.909 Test: bs_load_after_failed_grow ...passed 00:04:47.909 Test: bs_cluster_sz ...[2024-04-15 20:31:31.296302] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:04:47.909 [2024-04-15 20:31:31.296423] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:04:47.909 [2024-04-15 20:31:31.296453] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:04:47.909 passed 00:04:47.909 Test: bs_resize_md ...passed 00:04:47.909 Test: bs_destroy ...passed 00:04:47.909 Test: bs_type ...passed 00:04:47.909 Test: bs_super_block ...passed 00:04:47.909 Test: bs_test_recover_cluster_count ...passed 00:04:47.909 Test: bs_grow_live ...passed 00:04:47.909 Test: bs_grow_live_no_space ...passed 00:04:47.909 Test: bs_test_grow ...passed 00:04:47.909 Test: blob_serialize_test ...passed 00:04:47.909 Test: super_block_crc ...passed 00:04:47.909 Test: blob_thin_prov_write_count_io ...passed 00:04:47.909 Test: bs_load_iter_test ...passed 00:04:47.909 Test: blob_relations ...[2024-04-15 20:31:31.391141] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:04:47.909 [2024-04-15 20:31:31.391207] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:47.909 [2024-04-15 20:31:31.391544] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:04:47.909 [2024-04-15 20:31:31.391565] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:47.909 passed 00:04:47.909 Test: blob_relations2 ...[2024-04-15 20:31:31.400506] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:04:47.909 [2024-04-15 20:31:31.400559] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:47.909 [2024-04-15 20:31:31.400579] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:04:47.909 [2024-04-15 20:31:31.400591] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:47.909 [2024-04-15 20:31:31.401246] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:04:47.909 [2024-04-15 20:31:31.401281] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:47.909 [2024-04-15 20:31:31.401470] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:04:47.909 [2024-04-15 20:31:31.401493] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:47.909 passed 00:04:48.169 Test: blob_relations3 ...passed 00:04:48.169 Test: blobstore_clean_power_failure ...passed 00:04:48.169 Test: blob_delete_snapshot_power_failure ...[2024-04-15 20:31:31.490627] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:04:48.169 [2024-04-15 20:31:31.497907] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:04:48.169 [2024-04-15 20:31:31.497970] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:04:48.169 [2024-04-15 20:31:31.498003] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:48.169 [2024-04-15 20:31:31.505201] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:04:48.169 [2024-04-15 20:31:31.505260] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:04:48.169 [2024-04-15 20:31:31.505281] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:04:48.169 [2024-04-15 20:31:31.505298] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:48.169 [2024-04-15 20:31:31.512567] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:04:48.169 [2024-04-15 20:31:31.512638] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:48.169 [2024-04-15 20:31:31.525826] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:04:48.169 [2024-04-15 20:31:31.526024] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:48.169 [2024-04-15 20:31:31.537492] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:04:48.169 [2024-04-15 20:31:31.537596] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:48.169 passed 00:04:48.169 Test: blob_create_snapshot_power_failure ...[2024-04-15 20:31:31.563081] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:04:48.169 [2024-04-15 20:31:31.582758] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:04:48.169 [2024-04-15 20:31:31.590549] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:04:48.169 passed 00:04:48.169 Test: blob_io_unit ...passed 00:04:48.169 Test: blob_io_unit_compatibility ...passed 00:04:48.169 Test: blob_ext_md_pages ...passed 00:04:48.169 Test: blob_esnap_io_4096_4096 ...passed 00:04:48.429 Test: blob_esnap_io_512_512 ...passed 00:04:48.429 Test: blob_esnap_io_4096_512 ...passed 00:04:48.429 Test: blob_esnap_io_512_4096 ...passed 00:04:48.429 Suite: blob_bs_copy_noextent 00:04:48.429 Test: blob_open ...passed 00:04:48.429 Test: blob_create ...[2024-04-15 20:31:31.741932] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:04:48.429 passed 00:04:48.429 Test: blob_create_loop ...passed 00:04:48.429 Test: blob_create_fail ...[2024-04-15 20:31:31.812205] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:04:48.429 passed 00:04:48.429 Test: blob_create_internal ...passed 00:04:48.429 Test: blob_create_zero_extent ...passed 00:04:48.429 Test: blob_snapshot ...passed 00:04:48.429 Test: blob_clone ...passed 00:04:48.688 Test: blob_inflate ...[2024-04-15 20:31:31.939427] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:04:48.688 passed 00:04:48.688 Test: blob_delete ...passed 00:04:48.688 Test: blob_resize_test ...[2024-04-15 20:31:31.982271] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:04:48.688 passed 00:04:48.688 Test: channel_ops ...passed 00:04:48.688 Test: blob_super ...passed 00:04:48.688 Test: blob_rw_verify_iov ...passed 00:04:48.688 Test: blob_unmap ...passed 00:04:48.688 Test: blob_iter ...passed 00:04:48.688 Test: blob_parse_md ...passed 00:04:48.688 Test: bs_load_pending_removal ...passed 00:04:48.688 Test: bs_unload ...[2024-04-15 20:31:32.143621] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:04:48.688 passed 00:04:48.688 Test: bs_usable_clusters ...passed 00:04:48.946 Test: blob_crc ...[2024-04-15 20:31:32.188935] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:04:48.946 [2024-04-15 20:31:32.189085] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:04:48.946 passed 00:04:48.946 Test: blob_flags ...passed 00:04:48.946 Test: bs_version ...passed 00:04:48.946 Test: blob_set_xattrs_test ...[2024-04-15 20:31:32.262277] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:04:48.946 [2024-04-15 20:31:32.262366] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:04:48.946 passed 00:04:48.946 Test: blob_thin_prov_alloc ...passed 00:04:48.946 Test: blob_insert_cluster_msg_test ...passed 00:04:48.946 Test: blob_thin_prov_rw ...passed 00:04:48.946 Test: blob_thin_prov_rle ...passed 00:04:48.946 Test: blob_thin_prov_rw_iov ...passed 00:04:49.206 Test: blob_snapshot_rw ...passed 00:04:49.206 Test: blob_snapshot_rw_iov ...passed 00:04:49.206 Test: blob_inflate_rw ...passed 00:04:49.206 Test: blob_snapshot_freeze_io ...passed 00:04:49.468 Test: blob_operation_split_rw ...passed 00:04:49.468 Test: blob_operation_split_rw_iov ...passed 00:04:49.468 Test: blob_simultaneous_operations ...[2024-04-15 20:31:32.867893] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:04:49.468 [2024-04-15 20:31:32.867967] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:49.468 [2024-04-15 20:31:32.868311] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:04:49.468 [2024-04-15 20:31:32.868343] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:49.468 [2024-04-15 20:31:32.873064] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:04:49.468 [2024-04-15 20:31:32.873182] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:49.468 [2024-04-15 20:31:32.873344] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:04:49.468 [2024-04-15 20:31:32.873386] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:49.468 passed 00:04:49.468 Test: blob_persist_test ...passed 00:04:49.468 Test: blob_decouple_snapshot ...passed 00:04:49.727 Test: blob_seek_io_unit ...passed 00:04:49.727 Test: blob_nested_freezes ...passed 00:04:49.727 Suite: blob_blob_copy_noextent 00:04:49.727 Test: blob_write ...passed 00:04:49.727 Test: blob_read ...passed 00:04:49.727 Test: blob_rw_verify ...passed 00:04:49.727 Test: blob_rw_verify_iov_nomem ...passed 00:04:49.727 Test: blob_rw_iov_read_only ...passed 00:04:49.727 Test: blob_xattr ...passed 00:04:49.727 Test: blob_dirty_shutdown ...passed 00:04:49.727 Test: blob_is_degraded ...passed 00:04:49.727 Suite: blob_esnap_bs_copy_noextent 00:04:49.727 Test: blob_esnap_create ...passed 00:04:49.986 Test: blob_esnap_thread_add_remove ...passed 00:04:49.986 Test: blob_esnap_clone_snapshot ...passed 00:04:49.986 Test: blob_esnap_clone_inflate ...passed 00:04:49.986 Test: blob_esnap_clone_decouple ...passed 00:04:49.986 Test: blob_esnap_clone_reload ...passed 00:04:49.986 Test: blob_esnap_hotplug ...passed 00:04:49.986 Suite: blob_copy_extent 00:04:49.986 Test: blob_init ...[2024-04-15 20:31:33.368052] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:04:49.986 passed 00:04:49.986 Test: blob_thin_provision ...passed 00:04:49.986 Test: blob_read_only ...passed 00:04:49.986 Test: bs_load ...[2024-04-15 20:31:33.398420] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:04:49.986 passed 00:04:49.986 Test: bs_load_custom_cluster_size ...passed 00:04:49.986 Test: bs_load_after_failed_grow ...passed 00:04:49.986 Test: bs_cluster_sz ...[2024-04-15 20:31:33.420181] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:04:49.986 [2024-04-15 20:31:33.420311] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:04:49.986 [2024-04-15 20:31:33.420360] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:04:49.986 passed 00:04:49.986 Test: bs_resize_md ...passed 00:04:49.986 Test: bs_destroy ...passed 00:04:49.986 Test: bs_type ...passed 00:04:49.986 Test: bs_super_block ...passed 00:04:49.986 Test: bs_test_recover_cluster_count ...passed 00:04:49.986 Test: bs_grow_live ...passed 00:04:49.986 Test: bs_grow_live_no_space ...passed 00:04:49.986 Test: bs_test_grow ...passed 00:04:50.246 Test: blob_serialize_test ...passed 00:04:50.246 Test: super_block_crc ...passed 00:04:50.246 Test: blob_thin_prov_write_count_io ...passed 00:04:50.246 Test: bs_load_iter_test ...passed 00:04:50.246 Test: blob_relations ...[2024-04-15 20:31:33.513797] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:04:50.246 [2024-04-15 20:31:33.513872] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:50.246 [2024-04-15 20:31:33.514468] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:04:50.246 [2024-04-15 20:31:33.514501] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:50.246 passed 00:04:50.246 Test: blob_relations2 ...[2024-04-15 20:31:33.523189] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:04:50.246 [2024-04-15 20:31:33.523250] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:50.246 [2024-04-15 20:31:33.523296] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:04:50.246 [2024-04-15 20:31:33.523318] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:50.246 [2024-04-15 20:31:33.524180] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:04:50.246 [2024-04-15 20:31:33.524215] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:50.246 [2024-04-15 20:31:33.524507] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:04:50.246 [2024-04-15 20:31:33.524536] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:50.246 passed 00:04:50.246 Test: blob_relations3 ...passed 00:04:50.246 Test: blobstore_clean_power_failure ...passed 00:04:50.247 Test: blob_delete_snapshot_power_failure ...[2024-04-15 20:31:33.619544] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:04:50.247 [2024-04-15 20:31:33.626995] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:04:50.247 [2024-04-15 20:31:33.640048] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:04:50.247 [2024-04-15 20:31:33.640171] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:04:50.247 [2024-04-15 20:31:33.640216] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:50.247 [2024-04-15 20:31:33.654302] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:04:50.247 [2024-04-15 20:31:33.654367] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:04:50.247 [2024-04-15 20:31:33.654391] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:04:50.247 [2024-04-15 20:31:33.654414] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:50.247 [2024-04-15 20:31:33.662718] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:04:50.247 [2024-04-15 20:31:33.662766] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:04:50.247 [2024-04-15 20:31:33.662783] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:04:50.247 [2024-04-15 20:31:33.662801] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:50.247 [2024-04-15 20:31:33.670359] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:04:50.247 [2024-04-15 20:31:33.670422] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:50.247 [2024-04-15 20:31:33.677970] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:04:50.247 [2024-04-15 20:31:33.678033] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:50.247 [2024-04-15 20:31:33.685522] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:04:50.247 [2024-04-15 20:31:33.685580] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:50.247 passed 00:04:50.247 Test: blob_create_snapshot_power_failure ...[2024-04-15 20:31:33.716058] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:04:50.247 [2024-04-15 20:31:33.723196] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:04:50.247 [2024-04-15 20:31:33.736347] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:04:50.506 [2024-04-15 20:31:33.748361] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:04:50.506 passed 00:04:50.506 Test: blob_io_unit ...passed 00:04:50.506 Test: blob_io_unit_compatibility ...passed 00:04:50.506 Test: blob_ext_md_pages ...passed 00:04:50.506 Test: blob_esnap_io_4096_4096 ...passed 00:04:50.506 Test: blob_esnap_io_512_512 ...passed 00:04:50.506 Test: blob_esnap_io_4096_512 ...passed 00:04:50.506 Test: 
blob_esnap_io_512_4096 ...passed 00:04:50.506 Suite: blob_bs_copy_extent 00:04:50.506 Test: blob_open ...passed 00:04:50.506 Test: blob_create ...[2024-04-15 20:31:33.895703] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:04:50.506 passed 00:04:50.506 Test: blob_create_loop ...passed 00:04:50.506 Test: blob_create_fail ...[2024-04-15 20:31:33.965174] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:04:50.506 passed 00:04:50.506 Test: blob_create_internal ...passed 00:04:50.766 Test: blob_create_zero_extent ...passed 00:04:50.766 Test: blob_snapshot ...passed 00:04:50.766 Test: blob_clone ...passed 00:04:50.766 Test: blob_inflate ...[2024-04-15 20:31:34.081738] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:04:50.766 passed 00:04:50.766 Test: blob_delete ...passed 00:04:50.766 Test: blob_resize_test ...[2024-04-15 20:31:34.122157] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:04:50.766 passed 00:04:50.766 Test: channel_ops ...passed 00:04:50.766 Test: blob_super ...passed 00:04:50.766 Test: blob_rw_verify_iov ...passed 00:04:50.766 Test: blob_unmap ...passed 00:04:50.766 Test: blob_iter ...passed 00:04:51.025 Test: blob_parse_md ...passed 00:04:51.025 Test: bs_load_pending_removal ...passed 00:04:51.025 Test: bs_unload ...[2024-04-15 20:31:34.308860] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:04:51.025 passed 00:04:51.025 Test: bs_usable_clusters ...passed 00:04:51.025 Test: blob_crc ...[2024-04-15 20:31:34.357771] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:04:51.025 [2024-04-15 20:31:34.357863] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:04:51.025 passed 00:04:51.025 Test: blob_flags ...passed 00:04:51.025 Test: bs_version ...passed 00:04:51.025 Test: blob_set_xattrs_test ...[2024-04-15 20:31:34.431771] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:04:51.026 [2024-04-15 20:31:34.431862] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:04:51.026 passed 00:04:51.026 Test: blob_thin_prov_alloc ...passed 00:04:51.026 Test: blob_insert_cluster_msg_test ...passed 00:04:51.285 Test: blob_thin_prov_rw ...passed 00:04:51.285 Test: blob_thin_prov_rle ...passed 00:04:51.285 Test: blob_thin_prov_rw_iov ...passed 00:04:51.285 Test: blob_snapshot_rw ...passed 00:04:51.285 Test: blob_snapshot_rw_iov ...passed 00:04:51.285 Test: blob_inflate_rw ...passed 00:04:51.285 Test: blob_snapshot_freeze_io ...passed 00:04:51.545 Test: blob_operation_split_rw ...passed 00:04:51.545 Test: blob_operation_split_rw_iov ...passed 00:04:51.545 Test: blob_simultaneous_operations ...[2024-04-15 20:31:34.984874] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:04:51.545 [2024-04-15 
20:31:34.984956] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:51.545 [2024-04-15 20:31:34.985292] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:04:51.545 [2024-04-15 20:31:34.985314] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:51.545 [2024-04-15 20:31:34.988492] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:04:51.545 [2024-04-15 20:31:34.988528] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:51.545 [2024-04-15 20:31:34.988594] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:04:51.545 [2024-04-15 20:31:34.988611] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:51.545 passed 00:04:51.545 Test: blob_persist_test ...passed 00:04:51.804 Test: blob_decouple_snapshot ...passed 00:04:51.804 Test: blob_seek_io_unit ...passed 00:04:51.804 Test: blob_nested_freezes ...passed 00:04:51.804 Suite: blob_blob_copy_extent 00:04:51.804 Test: blob_write ...passed 00:04:51.804 Test: blob_read ...passed 00:04:51.804 Test: blob_rw_verify ...passed 00:04:51.804 Test: blob_rw_verify_iov_nomem ...passed 00:04:51.804 Test: blob_rw_iov_read_only ...passed 00:04:51.804 Test: blob_xattr ...passed 00:04:51.804 Test: blob_dirty_shutdown ...passed 00:04:51.804 Test: blob_is_degraded ...passed 00:04:51.804 Suite: blob_esnap_bs_copy_extent 00:04:52.063 Test: blob_esnap_create ...passed 00:04:52.063 Test: blob_esnap_thread_add_remove ...passed 00:04:52.063 Test: blob_esnap_clone_snapshot ...passed 00:04:52.063 Test: blob_esnap_clone_inflate ...passed 00:04:52.063 Test: blob_esnap_clone_decouple ...passed 00:04:52.063 Test: blob_esnap_clone_reload ...passed 00:04:52.063 Test: blob_esnap_hotplug ...passed 00:04:52.063 00:04:52.063 Run Summary: Type Total Ran Passed Failed Inactive 00:04:52.063 suites 16 16 n/a 0 0 00:04:52.063 tests 348 348 348 0 0 00:04:52.063 asserts 92605 92605 92605 0 n/a 00:04:52.063 00:04:52.063 Elapsed time = 8.640 seconds 00:04:52.063 20:31:35 -- unit/unittest.sh@41 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:04:52.063 00:04:52.063 00:04:52.063 CUnit - A unit testing framework for C - Version 2.1-3 00:04:52.063 http://cunit.sourceforge.net/ 00:04:52.063 00:04:52.063 00:04:52.063 Suite: blob_bdev 00:04:52.063 Test: create_bs_dev ...passed 00:04:52.063 Test: create_bs_dev_ro ...passed 00:04:52.063 Test: create_bs_dev_rw ...passed 00:04:52.063 Test: claim_bs_dev ...passed 00:04:52.063 Test: claim_bs_dev_ro ...passed 00:04:52.063 Test: deferred_destroy_refs ...passed 00:04:52.063 Test: deferred_destroy_channels ...passed 00:04:52.063 Test: deferred_destroy_threads ...passed 00:04:52.063 00:04:52.063 Run Summary: Type Total Ran Passed Failed Inactive 00:04:52.063 suites 1 1 n/a 0 0 00:04:52.063 tests 8 8 8 0 0 00:04:52.063 asserts 119 119 119 0 n/a 00:04:52.063 00:04:52.063 Elapsed time = 0.000 seconds 00:04:52.063 [2024-04-15 20:31:35.546468] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 507:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:04:52.063 [2024-04-15 20:31:35.546769] 
/home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:04:52.323 20:31:35 -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:04:52.323 00:04:52.323 00:04:52.323 CUnit - A unit testing framework for C - Version 2.1-3 00:04:52.323 http://cunit.sourceforge.net/ 00:04:52.323 00:04:52.323 00:04:52.323 Suite: tree 00:04:52.323 Test: blobfs_tree_op_test ...passed 00:04:52.323 00:04:52.323 Run Summary: Type Total Ran Passed Failed Inactive 00:04:52.323 suites 1 1 n/a 0 0 00:04:52.323 tests 1 1 1 0 0 00:04:52.323 asserts 27 27 27 0 n/a 00:04:52.323 00:04:52.323 Elapsed time = 0.010 seconds 00:04:52.323 20:31:35 -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:04:52.323 00:04:52.323 00:04:52.323 CUnit - A unit testing framework for C - Version 2.1-3 00:04:52.323 http://cunit.sourceforge.net/ 00:04:52.323 00:04:52.323 00:04:52.323 Suite: blobfs_async_ut 00:04:52.323 Test: fs_init ...passed 00:04:52.323 Test: fs_open ...passed 00:04:52.323 Test: fs_create ...passed 00:04:52.323 Test: fs_truncate ...passed 00:04:52.323 Test: fs_rename ...[2024-04-15 20:31:35.693326] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:04:52.323 passed 00:04:52.323 Test: fs_rw_async ...passed 00:04:52.323 Test: fs_writev_readv_async ...passed 00:04:52.323 Test: tree_find_buffer_ut ...passed 00:04:52.323 Test: channel_ops ...passed 00:04:52.323 Test: channel_ops_sync ...passed 00:04:52.323 00:04:52.323 Run Summary: Type Total Ran Passed Failed Inactive 00:04:52.323 suites 1 1 n/a 0 0 00:04:52.323 tests 10 10 10 0 0 00:04:52.323 asserts 292 292 292 0 n/a 00:04:52.323 00:04:52.323 Elapsed time = 0.100 seconds 00:04:52.323 20:31:35 -- unit/unittest.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:04:52.323 00:04:52.323 00:04:52.323 CUnit - A unit testing framework for C - Version 2.1-3 00:04:52.323 http://cunit.sourceforge.net/ 00:04:52.323 00:04:52.323 00:04:52.323 Suite: blobfs_sync_ut 00:04:52.323 Test: cache_read_after_write ...passed 00:04:52.323 Test: file_length ...[2024-04-15 20:31:35.804916] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:04:52.323 passed 00:04:52.598 Test: append_write_to_extend_blob ...passed 00:04:52.598 Test: partial_buffer ...passed 00:04:52.598 Test: cache_write_null_buffer ...passed 00:04:52.598 Test: fs_create_sync ...passed 00:04:52.598 Test: fs_rename_sync ...passed 00:04:52.598 Test: cache_append_no_cache ...passed 00:04:52.598 Test: fs_delete_file_without_close ...passed 00:04:52.598 00:04:52.598 Run Summary: Type Total Ran Passed Failed Inactive 00:04:52.598 suites 1 1 n/a 0 0 00:04:52.598 tests 9 9 9 0 0 00:04:52.598 asserts 345 345 345 0 n/a 00:04:52.598 00:04:52.598 Elapsed time = 0.190 seconds 00:04:52.598 20:31:35 -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:04:52.598 00:04:52.598 00:04:52.598 CUnit - A unit testing framework for C - Version 2.1-3 00:04:52.598 http://cunit.sourceforge.net/ 00:04:52.598 00:04:52.598 00:04:52.598 Suite: blobfs_bdev_ut 00:04:52.598 Test: spdk_blobfs_bdev_detect_test ...passed 00:04:52.598 Test: spdk_blobfs_bdev_create_test ...passed 00:04:52.598 Test: spdk_blobfs_bdev_mount_test ...passed 00:04:52.598 
00:04:52.598 Run Summary: Type Total Ran Passed Failed Inactive 00:04:52.598 suites 1 1 n/a 0 0 00:04:52.598 tests 3 3 3 0 0 00:04:52.598 asserts 9 9 9 0 n/a 00:04:52.598 00:04:52.598 Elapsed time = 0.000 seconds 00:04:52.598 [2024-04-15 20:31:35.924758] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:04:52.598 [2024-04-15 20:31:35.925020] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:04:52.598 ************************************ 00:04:52.598 END TEST unittest_blob_blobfs 00:04:52.598 ************************************ 00:04:52.598 00:04:52.598 real 0m9.202s 00:04:52.598 user 0m8.674s 00:04:52.598 sys 0m0.590s 00:04:52.598 20:31:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.598 20:31:35 -- common/autotest_common.sh@10 -- # set +x 00:04:52.598 20:31:35 -- unit/unittest.sh@232 -- # run_test unittest_event unittest_event 00:04:52.598 20:31:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:52.598 20:31:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:52.598 20:31:35 -- common/autotest_common.sh@10 -- # set +x 00:04:52.598 ************************************ 00:04:52.598 START TEST unittest_event 00:04:52.598 ************************************ 00:04:52.598 20:31:35 -- common/autotest_common.sh@1104 -- # unittest_event 00:04:52.598 20:31:35 -- unit/unittest.sh@50 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:04:52.598 00:04:52.598 00:04:52.598 CUnit - A unit testing framework for C - Version 2.1-3 00:04:52.598 http://cunit.sourceforge.net/ 00:04:52.598 00:04:52.598 00:04:52.598 Suite: app_suite 00:04:52.598 Test: test_spdk_app_parse_args ...app_ut [options] 00:04:52.598 options: 00:04:52.598 -c, --config JSON config file (default none) 00:04:52.598 --json JSON config file (default none) 00:04:52.598 --json-ignore-init-errors 00:04:52.598 don't exit on invalid config entry 00:04:52.598 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:04:52.598 -g, --single-file-segments 00:04:52.598 force creating just one hugetlbfs file 00:04:52.598 -h, --help show this usage 00:04:52.598 -i, --shm-id shared memory ID (optional) 00:04:52.598 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:04:52.598 --lcores lcore to CPU mapping list. The list is in the format: 00:04:52.598 [<,lcores[@CPUs]>...] 00:04:52.598 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:04:52.598 Within the group, '-' is used for range separator, 00:04:52.598 ',' is used for single number separator. 00:04:52.598 '( )' can be omitted for single element group, 00:04:52.598 '@' can be omitted if cpus and lcores have the same value 00:04:52.598 -n, --mem-channels channel number of memory channels used for DPDK 00:04:52.598 -p, --main-core main (primary) core for DPDK 00:04:52.598 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:04:52.598 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:04:52.599 --disable-cpumask-locks Disable CPU core lock files. 
00:04:52.599 --silence-noticelog disable notice level logging to stderr 00:04:52.599 --msg-mempool-size global message memory pool size in count (default: 262143) 00:04:52.599 -u, --no-pci disable PCI access 00:04:52.599 --wait-for-rpc wait for RPCs to initialize subsystems 00:04:52.599 --max-delay maximum reactor delay (in microseconds) 00:04:52.599 -B, --pci-blocked pci addr to block (can be used more than once) 00:04:52.599 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:04:52.599 -R, --huge-unlink unlink huge files after initialization 00:04:52.599 -v, --version print SPDK version 00:04:52.599 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:04:52.599 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:04:52.599 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:04:52.599 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:04:52.599 Tracepoints vary in size and can use more than one trace entry. 00:04:52.599 --rpcs-allowed comma-separated list of permitted RPCS 00:04:52.599 --env-context Opaque context for use of the env implementation 00:04:52.599 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:04:52.599 --no-huge run without using hugepages 00:04:52.599 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:04:52.599 -e, --tpoint-group [:] 00:04:52.599 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:04:52.599 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:04:52.599 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:04:52.599 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:04:52.599 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:04:52.599 app_ut [options] 00:04:52.599 options: 00:04:52.599 -c, --config JSON config file (default none) 00:04:52.599 --json JSON config file (default none) 00:04:52.599 --json-ignore-init-errors 00:04:52.599 don't exit on invalid config entry 00:04:52.599 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:04:52.599 -g, --single-file-segments 00:04:52.599 force creating just one hugetlbfs file 00:04:52.599 -h, --help show this usage 00:04:52.599 -i, --shm-id shared memory ID (optional) 00:04:52.599 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:04:52.599 --lcores lcore to CPU mapping list. The list is in the format: 00:04:52.599 [<,lcores[@CPUs]>...] 00:04:52.599 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:04:52.599 Within the group, '-' is used for range separator, 00:04:52.599 ',' is used for single number separator. 
00:04:52.599 '( )' can be omitted for single element group, 00:04:52.599 '@' can be omitted if cpus and lcores have the same value 00:04:52.599 -n, --mem-channels channel number of memory channels used for DPDK 00:04:52.599 -p, --main-core main (primary) core for DPDK 00:04:52.599 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:04:52.599 app_ut: invalid option -- 'z' 00:04:52.599 app_ut: unrecognized option '--test-long-opt' 00:04:52.599 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:04:52.599 --disable-cpumask-locks Disable CPU core lock files. 00:04:52.599 --silence-noticelog disable notice level logging to stderr 00:04:52.599 --msg-mempool-size global message memory pool size in count (default: 262143) 00:04:52.599 -u, --no-pci disable PCI access 00:04:52.599 --wait-for-rpc wait for RPCs to initialize subsystems 00:04:52.599 --max-delay maximum reactor delay (in microseconds) 00:04:52.599 -B, --pci-blocked pci addr to block (can be used more than once) 00:04:52.599 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:04:52.599 -R, --huge-unlink unlink huge files after initialization 00:04:52.599 -v, --version print SPDK version 00:04:52.599 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:04:52.599 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:04:52.599 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:04:52.599 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:04:52.599 Tracepoints vary in size and can use more than one trace entry. 00:04:52.599 --rpcs-allowed comma-separated list of permitted RPCS 00:04:52.599 --env-context Opaque context for use of the env implementation 00:04:52.599 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:04:52.599 --no-huge run without using hugepages 00:04:52.599 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:04:52.599 -e, --tpoint-group [:] 00:04:52.599 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:04:52.599 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:04:52.599 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:04:52.599 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:04:52.599 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:04:52.599 app_ut [options] 00:04:52.599 options: 00:04:52.599 -c, --config JSON config file (default none) 00:04:52.599 --json JSON config file (default none) 00:04:52.599 --json-ignore-init-errors 00:04:52.599 don't exit on invalid config entry 00:04:52.599 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:04:52.599 -g, --single-file-segments 00:04:52.599 force creating just one hugetlbfs file 00:04:52.599 -h, --help show this usage 00:04:52.599 -i, --shm-id shared memory ID (optional) 00:04:52.599 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:04:52.599 --lcores lcore to CPU mapping list. The list is in the format: 00:04:52.599 [<,lcores[@CPUs]>...] 
00:04:52.600 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:04:52.600 Within the group, '-' is used for range separator, 00:04:52.600 ',' is used for single number separator. 00:04:52.600 '( )' can be omitted for single element group, 00:04:52.600 '@' can be omitted if cpus and lcores have the same value 00:04:52.600 -n, --mem-channels channel number of memory channels used for DPDK 00:04:52.600 -p, --main-core main (primary) core for DPDK 00:04:52.600 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:04:52.600 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:04:52.600 --disable-cpumask-locks Disable CPU core lock files. 00:04:52.600 --silence-noticelog disable notice level logging to stderr 00:04:52.600 --msg-mempool-size global message memory pool size in count (default: 262143) 00:04:52.600 -u, --no-pci disable PCI access 00:04:52.600 --wait-for-rpc wait for RPCs to initialize subsystems 00:04:52.600 --max-delay maximum reactor delay (in microseconds) 00:04:52.600 -B, --pci-blocked pci addr to block (can be used more than once) 00:04:52.600 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:04:52.600 -R, --huge-unlink unlink huge files after initialization 00:04:52.600 -v, --version print SPDK version 00:04:52.600 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:04:52.600 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:04:52.600 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:04:52.600 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:04:52.600 Tracepoints vary in size and can use more than one trace entry. 00:04:52.600 --rpcs-allowed comma-separated list of permitted RPCS 00:04:52.600 --env-context Opaque context for use of the env implementation 00:04:52.600 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:04:52.600 --no-huge run without using hugepages 00:04:52.600 -L, --logflag enable log flag (all, json_util, log, rpc, thread[2024-04-15 20:31:36.021111] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1030:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 00:04:52.600 [2024-04-15 20:31:36.021403] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1211:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:04:52.600 , trace) 00:04:52.600 -e, --tpoint-group [:] 00:04:52.600 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:04:52.600 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:04:52.600 Groups and masks can be combined (e.g. thread,bdev:0x1). 
00:04:52.600 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:04:52.600 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:04:52.600 passed 00:04:52.600 00:04:52.600 Run Summary: Type Total Ran Passed Failed Inactive 00:04:52.600 suites 1 1 n/a 0 0 00:04:52.600 tests 1 1 1 0 0 00:04:52.600 asserts 8 8 8 0 n/a 00:04:52.600 00:04:52.600 Elapsed time = 0.000 seconds 00:04:52.600 [2024-04-15 20:31:36.021753] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1116:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:04:52.600 20:31:36 -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:04:52.600 00:04:52.600 00:04:52.600 CUnit - A unit testing framework for C - Version 2.1-3 00:04:52.600 http://cunit.sourceforge.net/ 00:04:52.600 00:04:52.600 00:04:52.600 Suite: app_suite 00:04:52.600 Test: test_create_reactor ...passed 00:04:52.600 Test: test_init_reactors ...passed 00:04:52.600 Test: test_event_call ...passed 00:04:52.600 Test: test_schedule_thread ...passed 00:04:52.600 Test: test_reschedule_thread ...passed 00:04:52.600 Test: test_bind_thread ...passed 00:04:52.600 Test: test_for_each_reactor ...passed 00:04:52.600 Test: test_reactor_stats ...passed 00:04:52.600 Test: test_scheduler ...passed 00:04:52.600 Test: test_governor ...passed 00:04:52.600 00:04:52.600 Run Summary: Type Total Ran Passed Failed Inactive 00:04:52.600 suites 1 1 n/a 0 0 00:04:52.600 tests 10 10 10 0 0 00:04:52.600 asserts 344 344 344 0 n/a 00:04:52.600 00:04:52.600 Elapsed time = 0.000 seconds 00:04:52.864 ************************************ 00:04:52.864 END TEST unittest_event 00:04:52.864 ************************************ 00:04:52.864 00:04:52.864 real 0m0.094s 00:04:52.864 user 0m0.053s 00:04:52.864 sys 0m0.041s 00:04:52.864 20:31:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.864 20:31:36 -- common/autotest_common.sh@10 -- # set +x 00:04:52.864 20:31:36 -- unit/unittest.sh@233 -- # uname -s 00:04:52.864 20:31:36 -- unit/unittest.sh@233 -- # '[' Linux = Linux ']' 00:04:52.864 20:31:36 -- unit/unittest.sh@234 -- # run_test unittest_ftl unittest_ftl 00:04:52.864 20:31:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:52.864 20:31:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:52.864 20:31:36 -- common/autotest_common.sh@10 -- # set +x 00:04:52.864 ************************************ 00:04:52.864 START TEST unittest_ftl 00:04:52.864 ************************************ 00:04:52.864 20:31:36 -- common/autotest_common.sh@1104 -- # unittest_ftl 00:04:52.864 20:31:36 -- unit/unittest.sh@55 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:04:52.864 00:04:52.864 00:04:52.864 CUnit - A unit testing framework for C - Version 2.1-3 00:04:52.864 http://cunit.sourceforge.net/ 00:04:52.864 00:04:52.864 00:04:52.864 Suite: ftl_band_suite 00:04:52.864 Test: test_band_block_offset_from_addr_base ...passed 00:04:52.864 Test: test_band_block_offset_from_addr_offset ...passed 00:04:52.864 Test: test_band_addr_from_block_offset ...passed 00:04:52.864 Test: test_band_set_addr ...passed 00:04:52.864 Test: test_invalidate_addr ...passed 00:04:52.864 Test: test_next_xfer_addr ...passed 00:04:52.864 00:04:52.864 Run Summary: Type Total Ran Passed Failed Inactive 00:04:52.864 suites 1 1 n/a 0 0 00:04:52.864 tests 6 6 6 0 0 00:04:52.864 asserts 30356 30356 30356 0 n/a 00:04:52.864 
00:04:52.864 Elapsed time = 0.170 seconds 00:04:53.123 20:31:36 -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:04:53.123 00:04:53.123 00:04:53.123 CUnit - A unit testing framework for C - Version 2.1-3 00:04:53.123 http://cunit.sourceforge.net/ 00:04:53.123 00:04:53.123 00:04:53.123 Suite: ftl_bitmap 00:04:53.123 Test: test_ftl_bitmap_create ...passed 00:04:53.123 Test: test_ftl_bitmap_get ...passed 00:04:53.123 Test: test_ftl_bitmap_set ...passed 00:04:53.123 Test: test_ftl_bitmap_clear ...passed 00:04:53.123 Test: test_ftl_bitmap_find_first_set ...passed 00:04:53.123 Test: test_ftl_bitmap_find_first_clear ...passed 00:04:53.123 Test: test_ftl_bitmap_count_set ...[2024-04-15 20:31:36.435791] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:04:53.123 [2024-04-15 20:31:36.436115] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:04:53.123 passed 00:04:53.123 00:04:53.123 Run Summary: Type Total Ran Passed Failed Inactive 00:04:53.123 suites 1 1 n/a 0 0 00:04:53.123 tests 7 7 7 0 0 00:04:53.123 asserts 137 137 137 0 n/a 00:04:53.123 00:04:53.123 Elapsed time = 0.000 seconds 00:04:53.123 20:31:36 -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:04:53.123 00:04:53.123 00:04:53.123 CUnit - A unit testing framework for C - Version 2.1-3 00:04:53.123 http://cunit.sourceforge.net/ 00:04:53.123 00:04:53.123 00:04:53.123 Suite: ftl_io_suite 00:04:53.123 Test: test_completion ...passed 00:04:53.123 Test: test_multiple_ios ...passed 00:04:53.123 00:04:53.123 Run Summary: Type Total Ran Passed Failed Inactive 00:04:53.123 suites 1 1 n/a 0 0 00:04:53.123 tests 2 2 2 0 0 00:04:53.123 asserts 47 47 47 0 n/a 00:04:53.123 00:04:53.123 Elapsed time = 0.000 seconds 00:04:53.123 20:31:36 -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:04:53.123 00:04:53.123 00:04:53.123 CUnit - A unit testing framework for C - Version 2.1-3 00:04:53.123 http://cunit.sourceforge.net/ 00:04:53.123 00:04:53.123 00:04:53.123 Suite: ftl_mngt 00:04:53.123 Test: test_next_step ...passed 00:04:53.123 Test: test_continue_step ...passed 00:04:53.123 Test: test_get_func_and_step_cntx_alloc ...passed 00:04:53.123 Test: test_fail_step ...passed 00:04:53.123 Test: test_mngt_call_and_call_rollback ...passed 00:04:53.123 Test: test_nested_process_failure ...passed 00:04:53.123 00:04:53.123 Run Summary: Type Total Ran Passed Failed Inactive 00:04:53.123 suites 1 1 n/a 0 0 00:04:53.123 tests 6 6 6 0 0 00:04:53.123 asserts 176 176 176 0 n/a 00:04:53.123 00:04:53.123 Elapsed time = 0.000 seconds 00:04:53.124 20:31:36 -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:04:53.124 00:04:53.124 00:04:53.124 CUnit - A unit testing framework for C - Version 2.1-3 00:04:53.124 http://cunit.sourceforge.net/ 00:04:53.124 00:04:53.124 00:04:53.124 Suite: ftl_mempool 00:04:53.124 Test: test_ftl_mempool_create ...passed 00:04:53.124 Test: test_ftl_mempool_get_put ...passed 00:04:53.124 00:04:53.124 Run Summary: Type Total Ran Passed Failed Inactive 00:04:53.124 suites 1 1 n/a 0 0 00:04:53.124 tests 2 2 2 0 0 00:04:53.124 asserts 36 36 36 0 n/a 00:04:53.124 00:04:53.124 Elapsed time = 0.000 seconds 00:04:53.124 20:31:36 -- unit/unittest.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:04:53.124 00:04:53.124 00:04:53.124 CUnit - A unit testing framework for C - Version 2.1-3 00:04:53.124 http://cunit.sourceforge.net/ 00:04:53.124 00:04:53.124 00:04:53.124 Suite: ftl_addr64_suite 00:04:53.124 Test: test_addr_cached ...passed 00:04:53.124 00:04:53.124 Run Summary: Type Total Ran Passed Failed Inactive 00:04:53.124 suites 1 1 n/a 0 0 00:04:53.124 tests 1 1 1 0 0 00:04:53.124 asserts 1536 1536 1536 0 n/a 00:04:53.124 00:04:53.124 Elapsed time = 0.000 seconds 00:04:53.124 20:31:36 -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:04:53.124 00:04:53.124 00:04:53.124 CUnit - A unit testing framework for C - Version 2.1-3 00:04:53.124 http://cunit.sourceforge.net/ 00:04:53.124 00:04:53.124 00:04:53.124 Suite: ftl_sb 00:04:53.124 Test: test_sb_crc_v2 ...passed 00:04:53.124 Test: test_sb_crc_v3 ...passed 00:04:53.124 Test: test_sb_v3_md_layout ...[2024-04-15 20:31:36.593739] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:04:53.124 [2024-04-15 20:31:36.594022] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:04:53.124 [2024-04-15 20:31:36.594067] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:04:53.124 [2024-04-15 20:31:36.594112] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:04:53.124 [2024-04-15 20:31:36.594152] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:04:53.124 [2024-04-15 20:31:36.594248] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:04:53.124 [2024-04-15 20:31:36.594281] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:04:53.124 [2024-04-15 20:31:36.594336] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:04:53.124 passed 00:04:53.124 Test: test_sb_v5_md_layout ...[2024-04-15 20:31:36.594393] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:04:53.124 [2024-04-15 20:31:36.594434] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:04:53.124 [2024-04-15 20:31:36.594476] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:04:53.124 passed 00:04:53.124 00:04:53.124 Run Summary: Type Total Ran Passed Failed Inactive 00:04:53.124 suites 1 1 n/a 0 0 00:04:53.124 tests 4 4 4 0 0 00:04:53.124 asserts 148 148 148 0 n/a 00:04:53.124 00:04:53.124 Elapsed time = 0.000 seconds 00:04:53.124 20:31:36 -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:04:53.383 00:04:53.383 00:04:53.383 CUnit - A unit testing framework 
for C - Version 2.1-3 00:04:53.383 http://cunit.sourceforge.net/ 00:04:53.383 00:04:53.383 00:04:53.383 Suite: ftl_layout_upgrade 00:04:53.383 Test: test_l2p_upgrade ...passed 00:04:53.383 00:04:53.383 Run Summary: Type Total Ran Passed Failed Inactive 00:04:53.383 suites 1 1 n/a 0 0 00:04:53.383 tests 1 1 1 0 0 00:04:53.383 asserts 140 140 140 0 n/a 00:04:53.383 00:04:53.383 Elapsed time = 0.000 seconds 00:04:53.383 ************************************ 00:04:53.383 END TEST unittest_ftl 00:04:53.383 ************************************ 00:04:53.383 00:04:53.383 real 0m0.486s 00:04:53.383 user 0m0.218s 00:04:53.383 sys 0m0.271s 00:04:53.383 20:31:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.383 20:31:36 -- common/autotest_common.sh@10 -- # set +x 00:04:53.383 20:31:36 -- unit/unittest.sh@237 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:04:53.383 20:31:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:53.383 20:31:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:53.383 20:31:36 -- common/autotest_common.sh@10 -- # set +x 00:04:53.383 ************************************ 00:04:53.383 START TEST unittest_accel 00:04:53.383 ************************************ 00:04:53.383 20:31:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:04:53.383 00:04:53.383 00:04:53.383 CUnit - A unit testing framework for C - Version 2.1-3 00:04:53.383 http://cunit.sourceforge.net/ 00:04:53.383 00:04:53.383 00:04:53.383 Suite: accel_sequence 00:04:53.383 Test: test_sequence_fill_copy ...passed 00:04:53.383 Test: test_sequence_abort ...passed 00:04:53.383 Test: test_sequence_append_error ...passed 00:04:53.383 Test: test_sequence_completion_error ...passed 00:04:53.383 Test: test_sequence_copy_elision ...[2024-04-15 20:31:36.730135] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f8499c2c7c0 00:04:53.383 [2024-04-15 20:31:36.730384] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7f8499c2c7c0 00:04:53.383 [2024-04-15 20:31:36.730416] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7f8499c2c7c0 00:04:53.383 [2024-04-15 20:31:36.730462] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7f8499c2c7c0 00:04:53.383 passed 00:04:53.383 Test: test_sequence_accel_buffers ...passed 00:04:53.383 Test: test_sequence_memory_domain ...passed 00:04:53.383 Test: test_sequence_module_memory_domain ...[2024-04-15 20:31:36.734504] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1728:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:04:53.383 [2024-04-15 20:31:36.734614] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1767:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:04:53.383 passed 00:04:53.383 Test: test_sequence_driver ...passed 00:04:53.383 Test: test_sequence_same_iovs ...[2024-04-15 20:31:36.737233] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1875:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7f84992d97c0 using driver: ut 00:04:53.383 [2024-04-15 20:31:36.737317] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1939:accel_sequence_task_cb: 
*ERROR*: Failed to execute fill operation, sequence: 0x7f84992d97c0 through driver: ut 00:04:53.383 passed 00:04:53.383 Test: test_sequence_crc32 ...passed 00:04:53.383 Suite: accel 00:04:53.383 Test: test_spdk_accel_task_complete ...passed 00:04:53.383 Test: test_get_task ...passed 00:04:53.383 Test: test_spdk_accel_submit_copy ...passed 00:04:53.383 Test: test_spdk_accel_submit_dualcast ...passed 00:04:53.383 Test: test_spdk_accel_submit_compare ...passed 00:04:53.383 Test: test_spdk_accel_submit_fill ...passed 00:04:53.383 Test: test_spdk_accel_submit_crc32c ...passed 00:04:53.383 Test: test_spdk_accel_submit_crc32cv ...passed 00:04:53.383 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:04:53.383 Test: test_spdk_accel_submit_xor ...passed 00:04:53.383 Test: test_spdk_accel_module_find_by_name ...passed 00:04:53.383 Test: test_spdk_accel_module_register ...passed 00:04:53.383 00:04:53.384 Run Summary: Type Total Ran Passed Failed Inactive 00:04:53.384 suites 2 2 n/a 0 0 00:04:53.384 tests 23 23 23 0 0 00:04:53.384 asserts 754 754 754 0 n/a 00:04:53.384 00:04:53.384 Elapsed time = 0.020 seconds 00:04:53.384 [2024-04-15 20:31:36.739976] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:04:53.384 [2024-04-15 20:31:36.740025] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:04:53.384 ************************************ 00:04:53.384 END TEST unittest_accel 00:04:53.384 ************************************ 00:04:53.384 00:04:53.384 real 0m0.060s 00:04:53.384 user 0m0.029s 00:04:53.384 sys 0m0.031s 00:04:53.384 20:31:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.384 20:31:36 -- common/autotest_common.sh@10 -- # set +x 00:04:53.384 20:31:36 -- unit/unittest.sh@238 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:04:53.384 20:31:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:53.384 20:31:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:53.384 20:31:36 -- common/autotest_common.sh@10 -- # set +x 00:04:53.384 ************************************ 00:04:53.384 START TEST unittest_ioat 00:04:53.384 ************************************ 00:04:53.384 20:31:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:04:53.384 00:04:53.384 00:04:53.384 CUnit - A unit testing framework for C - Version 2.1-3 00:04:53.384 http://cunit.sourceforge.net/ 00:04:53.384 00:04:53.384 00:04:53.384 Suite: ioat 00:04:53.384 Test: ioat_state_check ...passed 00:04:53.384 00:04:53.384 Run Summary: Type Total Ran Passed Failed Inactive 00:04:53.384 suites 1 1 n/a 0 0 00:04:53.384 tests 1 1 1 0 0 00:04:53.384 asserts 32 32 32 0 n/a 00:04:53.384 00:04:53.384 Elapsed time = 0.000 seconds 00:04:53.384 00:04:53.384 real 0m0.037s 00:04:53.384 user 0m0.021s 00:04:53.384 sys 0m0.017s 00:04:53.384 20:31:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.384 ************************************ 00:04:53.384 END TEST unittest_ioat 00:04:53.384 ************************************ 00:04:53.384 20:31:36 -- common/autotest_common.sh@10 -- # set +x 00:04:53.644 20:31:36 -- unit/unittest.sh@239 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:04:53.644 20:31:36 -- unit/unittest.sh@240 -- # run_test unittest_idxd_user 
/home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:04:53.644 20:31:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:53.644 20:31:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:53.644 20:31:36 -- common/autotest_common.sh@10 -- # set +x 00:04:53.644 ************************************ 00:04:53.644 START TEST unittest_idxd_user 00:04:53.644 ************************************ 00:04:53.644 20:31:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:04:53.644 00:04:53.644 00:04:53.644 CUnit - A unit testing framework for C - Version 2.1-3 00:04:53.644 http://cunit.sourceforge.net/ 00:04:53.644 00:04:53.644 00:04:53.644 Suite: idxd_user 00:04:53.644 Test: test_idxd_wait_cmd ...passed 00:04:53.644 Test: test_idxd_reset_dev ...passed 00:04:53.644 Test: test_idxd_group_config ...passed 00:04:53.644 Test: test_idxd_wq_config ...passed 00:04:53.644 00:04:53.644 Run Summary: Type Total Ran Passed Failed Inactive 00:04:53.644 suites 1 1 n/a 0 0 00:04:53.644 tests 4 4 4 0 0 00:04:53.644 asserts 20 20 20 0 n/a 00:04:53.644 00:04:53.644 Elapsed time = 0.000 seconds 00:04:53.644 [2024-04-15 20:31:36.933319] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:04:53.644 [2024-04-15 20:31:36.933588] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:04:53.645 [2024-04-15 20:31:36.933706] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:04:53.645 [2024-04-15 20:31:36.933757] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:04:53.645 ************************************ 00:04:53.645 END TEST unittest_idxd_user 00:04:53.645 ************************************ 00:04:53.645 00:04:53.645 real 0m0.038s 00:04:53.645 user 0m0.014s 00:04:53.645 sys 0m0.025s 00:04:53.645 20:31:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.645 20:31:36 -- common/autotest_common.sh@10 -- # set +x 00:04:53.645 20:31:36 -- unit/unittest.sh@242 -- # run_test unittest_iscsi unittest_iscsi 00:04:53.645 20:31:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:53.645 20:31:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:53.645 20:31:36 -- common/autotest_common.sh@10 -- # set +x 00:04:53.645 ************************************ 00:04:53.645 START TEST unittest_iscsi 00:04:53.645 ************************************ 00:04:53.645 20:31:37 -- common/autotest_common.sh@1104 -- # unittest_iscsi 00:04:53.645 20:31:37 -- unit/unittest.sh@66 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:04:53.645 00:04:53.645 00:04:53.645 CUnit - A unit testing framework for C - Version 2.1-3 00:04:53.645 http://cunit.sourceforge.net/ 00:04:53.645 00:04:53.645 00:04:53.645 Suite: conn_suite 00:04:53.645 Test: read_task_split_in_order_case ...passed 00:04:53.645 Test: read_task_split_reverse_order_case ...passed 00:04:53.645 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:04:53.645 Test: process_non_read_task_completion_test ...passed 00:04:53.645 Test: free_tasks_on_connection ...passed 00:04:53.645 Test: free_tasks_with_queued_datain ...passed 00:04:53.645 Test: abort_queued_datain_task_test ...passed 00:04:53.645 Test: abort_queued_datain_tasks_test ...passed 00:04:53.645 00:04:53.645 Run Summary: 
Type Total Ran Passed Failed Inactive 00:04:53.645 suites 1 1 n/a 0 0 00:04:53.645 tests 8 8 8 0 0 00:04:53.645 asserts 230 230 230 0 n/a 00:04:53.645 00:04:53.645 Elapsed time = 0.000 seconds 00:04:53.645 20:31:37 -- unit/unittest.sh@67 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:04:53.645 00:04:53.645 00:04:53.645 CUnit - A unit testing framework for C - Version 2.1-3 00:04:53.645 http://cunit.sourceforge.net/ 00:04:53.645 00:04:53.645 00:04:53.645 Suite: iscsi_suite 00:04:53.645 Test: param_negotiation_test ...passed 00:04:53.645 Test: list_negotiation_test ...passed 00:04:53.645 Test: parse_valid_test ...passed 00:04:53.645 Test: parse_invalid_test ...[2024-04-15 20:31:37.074541] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:04:53.645 [2024-04-15 20:31:37.074729] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:04:53.645 [2024-04-15 20:31:37.074763] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 208:iscsi_parse_param: *ERROR*: Empty key 00:04:53.645 [2024-04-15 20:31:37.074819] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:04:53.645 passed 00:04:53.645 00:04:53.645 [2024-04-15 20:31:37.074927] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 256 00:04:53.645 [2024-04-15 20:31:37.074994] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 215:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:04:53.645 [2024-04-15 20:31:37.075078] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 229:iscsi_parse_param: *ERROR*: Duplicated Key B 00:04:53.645 Run Summary: Type Total Ran Passed Failed Inactive 00:04:53.645 suites 1 1 n/a 0 0 00:04:53.645 tests 4 4 4 0 0 00:04:53.645 asserts 161 161 161 0 n/a 00:04:53.645 00:04:53.645 Elapsed time = 0.000 seconds 00:04:53.645 20:31:37 -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:04:53.645 00:04:53.645 00:04:53.645 CUnit - A unit testing framework for C - Version 2.1-3 00:04:53.645 http://cunit.sourceforge.net/ 00:04:53.645 00:04:53.645 00:04:53.645 Suite: iscsi_target_node_suite 00:04:53.645 Test: add_lun_test_cases ...[2024-04-15 20:31:37.104122] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1248:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:04:53.645 passed 00:04:53.645 Test: allow_any_allowed ...passed 00:04:53.645 Test: allow_ipv6_allowed ...passed 00:04:53.645 Test: allow_ipv6_denied ...passed 00:04:53.645 Test: allow_ipv6_invalid ...passed 00:04:53.645 Test: allow_ipv4_allowed ...passed 00:04:53.645 Test: allow_ipv4_denied ...passed 00:04:53.645 Test: allow_ipv4_invalid ...passed 00:04:53.645 Test: node_access_allowed ...[2024-04-15 20:31:37.104324] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1254:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:04:53.645 [2024-04-15 20:31:37.104389] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:04:53.645 [2024-04-15 20:31:37.104440] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:04:53.645 [2024-04-15 20:31:37.104463] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1266:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:04:53.645 passed 00:04:53.645 Test: node_access_denied_by_empty_netmask ...passed 00:04:53.645 
Test: node_access_multi_initiator_groups_cases ...passed 00:04:53.645 Test: allow_iscsi_name_multi_maps_case ...passed 00:04:53.645 Test: chap_param_test_cases ...[2024-04-15 20:31:37.104795] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:04:53.645 [2024-04-15 20:31:37.104824] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:04:53.645 passed 00:04:53.645 00:04:53.645 Run Summary: Type Total Ran Passed Failed Inactive 00:04:53.645 suites 1 1 n/a 0 0 00:04:53.645 tests 13 13 13 0 0 00:04:53.645 asserts 50 50 50 0 n/a 00:04:53.645 00:04:53.645 Elapsed time = 0.000 seconds 00:04:53.645 [2024-04-15 20:31:37.104866] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:04:53.645 [2024-04-15 20:31:37.104894] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:04:53.645 [2024-04-15 20:31:37.104923] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1026:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:04:53.645 20:31:37 -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:04:53.645 00:04:53.645 00:04:53.645 CUnit - A unit testing framework for C - Version 2.1-3 00:04:53.645 http://cunit.sourceforge.net/ 00:04:53.645 00:04:53.645 00:04:53.645 Suite: iscsi_suite 00:04:53.645 Test: op_login_check_target_test ...passed 00:04:53.645 Test: op_login_session_normal_test ...passed 00:04:53.645 Test: maxburstlength_test ...[2024-04-15 20:31:37.139003] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:04:53.645 [2024-04-15 20:31:37.139331] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:04:53.645 [2024-04-15 20:31:37.139367] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:04:53.645 [2024-04-15 20:31:37.139407] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:04:53.645 [2024-04-15 20:31:37.139452] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:04:53.645 [2024-04-15 20:31:37.139549] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:04:53.645 [2024-04-15 20:31:37.139689] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:04:53.645 [2024-04-15 20:31:37.139743] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:04:53.645 passed 00:04:53.645 Test: underflow_for_read_transfer_test ...passed 00:04:53.645 Test: underflow_for_zero_read_transfer_test ...[2024-04-15 20:31:37.139986] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:04:53.645 [2024-04-15 20:31:37.140037] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4548:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header 
(opcode=5) failed on NULL(NULL) 00:04:53.645 passed 00:04:53.645 Test: underflow_for_request_sense_test ...passed 00:04:53.645 Test: underflow_for_check_condition_test ...passed 00:04:53.645 Test: add_transfer_task_test ...passed 00:04:53.645 Test: get_transfer_task_test ...passed 00:04:53.645 Test: del_transfer_task_test ...passed 00:04:53.645 Test: clear_all_transfer_tasks_test ...passed 00:04:53.645 Test: build_iovs_test ...passed 00:04:53.645 Test: build_iovs_with_md_test ...passed 00:04:53.645 Test: pdu_hdr_op_login_test ...[2024-04-15 20:31:37.140807] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:04:53.645 [2024-04-15 20:31:37.140898] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1258:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:04:53.645 passed 00:04:53.645 Test: pdu_hdr_op_text_test ...[2024-04-15 20:31:37.140951] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:04:53.645 [2024-04-15 20:31:37.141019] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2240:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:04:53.645 [2024-04-15 20:31:37.141099] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2272:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:04:53.645 [2024-04-15 20:31:37.141148] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2285:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:04:53.645 passed 00:04:53.645 Test: pdu_hdr_op_logout_test ...passed 00:04:53.645 Test: pdu_hdr_op_scsi_test ...[2024-04-15 20:31:37.141206] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2515:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 
00:04:53.646 [2024-04-15 20:31:37.141354] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:04:53.646 [2024-04-15 20:31:37.141392] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:04:53.646 [2024-04-15 20:31:37.141436] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3364:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:04:53.646 [2024-04-15 20:31:37.141508] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3397:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:04:53.646 [2024-04-15 20:31:37.141580] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3404:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:04:53.646 [2024-04-15 20:31:37.141673] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3428:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:04:53.646 passed 00:04:53.646 Test: pdu_hdr_op_task_mgmt_test ...[2024-04-15 20:31:37.141738] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3605:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:04:53.646 [2024-04-15 20:31:37.141790] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3694:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:04:53.646 passed 00:04:53.646 Test: pdu_hdr_op_nopout_test ...[2024-04-15 20:31:37.141901] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3713:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:04:53.646 [2024-04-15 20:31:37.141953] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:04:53.646 [2024-04-15 20:31:37.141980] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:04:53.646 [2024-04-15 20:31:37.142012] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3743:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:04:53.646 passed 00:04:53.646 Test: pdu_hdr_op_data_test ...[2024-04-15 20:31:37.142054] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4186:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:04:53.646 [2024-04-15 20:31:37.142108] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4203:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:04:53.646 [2024-04-15 20:31:37.142167] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:04:53.646 [2024-04-15 20:31:37.142220] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4216:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:04:53.646 [2024-04-15 20:31:37.142263] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:04:53.646 [2024-04-15 20:31:37.142309] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4233:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:04:53.646 [2024-04-15 20:31:37.142345] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4243:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:04:53.646 passed 00:04:53.646 Test: empty_text_with_cbit_test ...passed 00:04:53.905 Test: pdu_payload_read_test ...[2024-04-15 20:31:37.143517] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4631:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:04:53.905 passed 00:04:53.905 Test: data_out_pdu_sequence_test ...passed 00:04:53.905 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:04:53.905 00:04:53.905 Run Summary: Type Total Ran Passed Failed Inactive 00:04:53.905 suites 1 1 n/a 0 0 00:04:53.905 tests 24 24 24 0 0 00:04:53.905 asserts 150253 150253 150253 0 n/a 00:04:53.905 00:04:53.905 Elapsed time = 0.020 seconds 00:04:53.905 20:31:37 -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:04:53.905 00:04:53.905 00:04:53.905 CUnit - A unit testing framework for C - Version 2.1-3 00:04:53.905 http://cunit.sourceforge.net/ 00:04:53.905 00:04:53.905 00:04:53.905 Suite: init_grp_suite 00:04:53.905 Test: create_initiator_group_success_case ...passed 00:04:53.905 Test: find_initiator_group_success_case ...passed 00:04:53.905 Test: register_initiator_group_twice_case ...passed 00:04:53.905 Test: add_initiator_name_success_case ...passed 00:04:53.905 Test: add_initiator_name_fail_case ...passed 00:04:53.905 Test: delete_all_initiator_names_success_case ...passed 00:04:53.905 Test: add_netmask_success_case ...passed 00:04:53.905 Test: add_netmask_fail_case ...passed 00:04:53.906 Test: delete_all_netmasks_success_case ...passed 00:04:53.906 Test: initiator_name_overwrite_all_to_any_case ...passed 00:04:53.906 Test: netmask_overwrite_all_to_any_case ...passed 00:04:53.906 Test: add_delete_initiator_names_case ...passed 00:04:53.906 Test: add_duplicated_initiator_names_case ...passed 00:04:53.906 Test: delete_nonexisting_initiator_names_case ...passed 00:04:53.906 Test: add_delete_netmasks_case ...passed 00:04:53.906 Test: add_duplicated_netmasks_case ...[2024-04-15 20:31:37.181008] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:04:53.906 [2024-04-15 20:31:37.181362] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:04:53.906 passed 00:04:53.906 Test: delete_nonexisting_netmasks_case ...passed 00:04:53.906 00:04:53.906 Run Summary: Type Total Ran Passed Failed Inactive 00:04:53.906 suites 1 1 n/a 0 0 00:04:53.906 tests 17 17 17 0 0 00:04:53.906 asserts 108 108 108 0 n/a 00:04:53.906 00:04:53.906 Elapsed time = 0.000 seconds 00:04:53.906 20:31:37 -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:04:53.906 00:04:53.906 00:04:53.906 CUnit - A unit testing framework for C - Version 2.1-3 00:04:53.906 http://cunit.sourceforge.net/ 00:04:53.906 00:04:53.906 00:04:53.906 Suite: portal_grp_suite 00:04:53.906 Test: portal_create_ipv4_normal_case ...passed 00:04:53.906 Test: portal_create_ipv6_normal_case ...passed 00:04:53.906 Test: portal_create_ipv4_wildcard_case ...passed 00:04:53.906 Test: portal_create_ipv6_wildcard_case ...passed 00:04:53.906 Test: portal_create_twice_case ...passed 00:04:53.906 Test: portal_grp_register_unregister_case ...passed 00:04:53.906 Test: portal_grp_register_twice_case ...passed 00:04:53.906 Test: portal_grp_add_delete_case ...[2024-04-15 20:31:37.209199] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:04:53.906 passed 00:04:53.906 Test: portal_grp_add_delete_twice_case ...passed 00:04:53.906 00:04:53.906 Run Summary: Type Total Ran 
Passed Failed Inactive 00:04:53.906 suites 1 1 n/a 0 0 00:04:53.906 tests 9 9 9 0 0 00:04:53.906 asserts 44 44 44 0 n/a 00:04:53.906 00:04:53.906 Elapsed time = 0.000 seconds 00:04:53.906 ************************************ 00:04:53.906 END TEST unittest_iscsi 00:04:53.906 ************************************ 00:04:53.906 00:04:53.906 real 0m0.208s 00:04:53.906 user 0m0.121s 00:04:53.906 sys 0m0.090s 00:04:53.906 20:31:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.906 20:31:37 -- common/autotest_common.sh@10 -- # set +x 00:04:53.906 20:31:37 -- unit/unittest.sh@243 -- # run_test unittest_json unittest_json 00:04:53.906 20:31:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:53.906 20:31:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:53.906 20:31:37 -- common/autotest_common.sh@10 -- # set +x 00:04:53.906 ************************************ 00:04:53.906 START TEST unittest_json 00:04:53.906 ************************************ 00:04:53.906 20:31:37 -- common/autotest_common.sh@1104 -- # unittest_json 00:04:53.906 20:31:37 -- unit/unittest.sh@75 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:04:53.906 00:04:53.906 00:04:53.906 CUnit - A unit testing framework for C - Version 2.1-3 00:04:53.906 http://cunit.sourceforge.net/ 00:04:53.906 00:04:53.906 00:04:53.906 Suite: json 00:04:53.906 Test: test_parse_literal ...passed 00:04:53.906 Test: test_parse_string_simple ...passed 00:04:53.906 Test: test_parse_string_control_chars ...passed 00:04:53.906 Test: test_parse_string_utf8 ...passed 00:04:53.906 Test: test_parse_string_escapes_twochar ...passed 00:04:53.906 Test: test_parse_string_escapes_unicode ...passed 00:04:53.906 Test: test_parse_number ...passed 00:04:53.906 Test: test_parse_array ...passed 00:04:53.906 Test: test_parse_object ...passed 00:04:53.906 Test: test_parse_nesting ...passed 00:04:53.906 Test: test_parse_comment ...passed 00:04:53.906 00:04:53.906 Run Summary: Type Total Ran Passed Failed Inactive 00:04:53.906 suites 1 1 n/a 0 0 00:04:53.906 tests 11 11 11 0 0 00:04:53.906 asserts 1516 1516 1516 0 n/a 00:04:53.906 00:04:53.906 Elapsed time = 0.000 seconds 00:04:53.906 20:31:37 -- unit/unittest.sh@76 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:04:53.906 00:04:53.906 00:04:53.906 CUnit - A unit testing framework for C - Version 2.1-3 00:04:53.906 http://cunit.sourceforge.net/ 00:04:53.906 00:04:53.906 00:04:53.906 Suite: json 00:04:53.906 Test: test_strequal ...passed 00:04:53.906 Test: test_num_to_uint16 ...passed 00:04:53.906 Test: test_num_to_int32 ...passed 00:04:53.906 Test: test_num_to_uint64 ...passed 00:04:53.906 Test: test_decode_object ...passed 00:04:53.906 Test: test_decode_array ...passed 00:04:53.906 Test: test_decode_bool ...passed 00:04:53.906 Test: test_decode_uint16 ...passed 00:04:53.906 Test: test_decode_int32 ...passed 00:04:53.906 Test: test_decode_uint32 ...passed 00:04:53.906 Test: test_decode_uint64 ...passed 00:04:53.906 Test: test_decode_string ...passed 00:04:53.906 Test: test_decode_uuid ...passed 00:04:53.906 Test: test_find ...passed 00:04:53.906 Test: test_find_array ...passed 00:04:53.906 Test: test_iterating ...passed 00:04:53.906 Test: test_free_object ...passed 00:04:53.906 00:04:53.906 Run Summary: Type Total Ran Passed Failed Inactive 00:04:53.906 suites 1 1 n/a 0 0 00:04:53.906 tests 17 17 17 0 0 00:04:53.906 asserts 236 236 236 0 n/a 00:04:53.906 00:04:53.906 Elapsed time = 0.000 seconds 00:04:53.906 20:31:37 -- 
unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:04:53.906 00:04:53.906 00:04:53.906 CUnit - A unit testing framework for C - Version 2.1-3 00:04:53.906 http://cunit.sourceforge.net/ 00:04:53.906 00:04:53.906 00:04:53.906 Suite: json 00:04:53.906 Test: test_write_literal ...passed 00:04:53.906 Test: test_write_string_simple ...passed 00:04:53.906 Test: test_write_string_escapes ...passed 00:04:53.906 Test: test_write_string_utf16le ...passed 00:04:53.906 Test: test_write_number_int32 ...passed 00:04:53.906 Test: test_write_number_uint32 ...passed 00:04:53.906 Test: test_write_number_uint128 ...passed 00:04:53.906 Test: test_write_string_number_uint128 ...passed 00:04:53.906 Test: test_write_number_int64 ...passed 00:04:53.906 Test: test_write_number_uint64 ...passed 00:04:53.906 Test: test_write_number_double ...passed 00:04:53.906 Test: test_write_uuid ...passed 00:04:53.906 Test: test_write_array ...passed 00:04:53.906 Test: test_write_object ...passed 00:04:53.906 Test: test_write_nesting ...passed 00:04:53.906 Test: test_write_val ...passed 00:04:53.906 00:04:53.906 Run Summary: Type Total Ran Passed Failed Inactive 00:04:53.906 suites 1 1 n/a 0 0 00:04:53.906 tests 16 16 16 0 0 00:04:53.906 asserts 918 918 918 0 n/a 00:04:53.906 00:04:53.906 Elapsed time = 0.000 seconds 00:04:53.906 20:31:37 -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:04:54.166 00:04:54.166 00:04:54.166 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.166 http://cunit.sourceforge.net/ 00:04:54.166 00:04:54.166 00:04:54.166 Suite: jsonrpc 00:04:54.166 Test: test_parse_request ...passed 00:04:54.166 Test: test_parse_request_streaming ...passed 00:04:54.166 00:04:54.166 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.166 suites 1 1 n/a 0 0 00:04:54.166 tests 2 2 2 0 0 00:04:54.166 asserts 289 289 289 0 n/a 00:04:54.166 00:04:54.166 Elapsed time = 0.000 seconds 00:04:54.166 ************************************ 00:04:54.166 END TEST unittest_json 00:04:54.166 ************************************ 00:04:54.166 00:04:54.166 real 0m0.151s 00:04:54.166 user 0m0.069s 00:04:54.166 sys 0m0.084s 00:04:54.166 20:31:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.166 20:31:37 -- common/autotest_common.sh@10 -- # set +x 00:04:54.166 20:31:37 -- unit/unittest.sh@244 -- # run_test unittest_rpc unittest_rpc 00:04:54.166 20:31:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:54.166 20:31:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:54.166 20:31:37 -- common/autotest_common.sh@10 -- # set +x 00:04:54.166 ************************************ 00:04:54.166 START TEST unittest_rpc 00:04:54.166 ************************************ 00:04:54.166 20:31:37 -- common/autotest_common.sh@1104 -- # unittest_rpc 00:04:54.166 20:31:37 -- unit/unittest.sh@82 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:04:54.166 00:04:54.166 00:04:54.166 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.166 http://cunit.sourceforge.net/ 00:04:54.166 00:04:54.166 00:04:54.166 Suite: rpc 00:04:54.166 Test: test_jsonrpc_handler ...passed 00:04:54.166 Test: test_spdk_rpc_is_method_allowed ...passed 00:04:54.166 Test: test_rpc_get_methods ...passed 00:04:54.166 Test: test_rpc_spdk_get_version ...passed 00:04:54.166 Test: test_spdk_rpc_listen_close ...passed 00:04:54.166 00:04:54.166 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:54.166 suites 1 1 n/a 0 0 00:04:54.166 tests 5 5 5 0 0 00:04:54.167 asserts 20 20 20 0 n/a 00:04:54.167 00:04:54.167 Elapsed time = 0.000 seconds 00:04:54.167 [2024-04-15 20:31:37.520019] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 378:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:04:54.167 ************************************ 00:04:54.167 END TEST unittest_rpc 00:04:54.167 ************************************ 00:04:54.167 00:04:54.167 real 0m0.034s 00:04:54.167 user 0m0.021s 00:04:54.167 sys 0m0.014s 00:04:54.167 20:31:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.167 20:31:37 -- common/autotest_common.sh@10 -- # set +x 00:04:54.167 20:31:37 -- unit/unittest.sh@245 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:04:54.167 20:31:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:54.167 20:31:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:54.167 20:31:37 -- common/autotest_common.sh@10 -- # set +x 00:04:54.167 ************************************ 00:04:54.167 START TEST unittest_notify 00:04:54.167 ************************************ 00:04:54.167 20:31:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:04:54.167 00:04:54.167 00:04:54.167 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.167 http://cunit.sourceforge.net/ 00:04:54.167 00:04:54.167 00:04:54.167 Suite: app_suite 00:04:54.167 Test: notify ...passed 00:04:54.167 00:04:54.167 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.167 suites 1 1 n/a 0 0 00:04:54.167 tests 1 1 1 0 0 00:04:54.167 asserts 13 13 13 0 n/a 00:04:54.167 00:04:54.167 Elapsed time = 0.000 seconds 00:04:54.167 ************************************ 00:04:54.167 END TEST unittest_notify 00:04:54.167 ************************************ 00:04:54.167 00:04:54.167 real 0m0.035s 00:04:54.167 user 0m0.016s 00:04:54.167 sys 0m0.019s 00:04:54.167 20:31:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.167 20:31:37 -- common/autotest_common.sh@10 -- # set +x 00:04:54.426 20:31:37 -- unit/unittest.sh@246 -- # run_test unittest_nvme unittest_nvme 00:04:54.427 20:31:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:54.427 20:31:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:54.427 20:31:37 -- common/autotest_common.sh@10 -- # set +x 00:04:54.427 ************************************ 00:04:54.427 START TEST unittest_nvme 00:04:54.427 ************************************ 00:04:54.427 20:31:37 -- common/autotest_common.sh@1104 -- # unittest_nvme 00:04:54.427 20:31:37 -- unit/unittest.sh@86 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:04:54.427 00:04:54.427 00:04:54.427 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.427 http://cunit.sourceforge.net/ 00:04:54.427 00:04:54.427 00:04:54.427 Suite: nvme 00:04:54.427 Test: test_opc_data_transfer ...passed 00:04:54.427 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:04:54.427 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:04:54.427 Test: test_trid_parse_and_compare ...[2024-04-15 20:31:37.710608] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1167:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:04:54.427 [2024-04-15 20:31:37.710913] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:04:54.427 [2024-04-15 20:31:37.711009] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1179:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:04:54.427 [2024-04-15 20:31:37.711058] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:04:54.427 [2024-04-15 20:31:37.711099] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1190:parse_next_key: *ERROR*: Key without value 00:04:54.427 passed 00:04:54.427 Test: test_trid_trtype_str ...passed 00:04:54.427 Test: test_trid_adrfam_str ...passed 00:04:54.427 Test: test_nvme_ctrlr_probe ...passed 00:04:54.427 Test: test_spdk_nvme_probe ...passed 00:04:54.427 Test: test_spdk_nvme_connect ...passed 00:04:54.427 Test: test_nvme_ctrlr_probe_internal ...passed 00:04:54.427 Test: test_nvme_init_controllers ...passed 00:04:54.427 Test: test_nvme_driver_init ...[2024-04-15 20:31:37.711198] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:04:54.427 [2024-04-15 20:31:37.711586] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:04:54.427 [2024-04-15 20:31:37.711707] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:04:54.427 [2024-04-15 20:31:37.711760] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:04:54.427 [2024-04-15 20:31:37.711808] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 812:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:04:54.427 [2024-04-15 20:31:37.711846] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:04:54.427 [2024-04-15 20:31:37.711944] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 989:spdk_nvme_connect: *ERROR*: No transport ID specified 00:04:54.427 [2024-04-15 20:31:37.712108] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:04:54.427 [2024-04-15 20:31:37.712169] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1000:spdk_nvme_connect: *ERROR*: Create probe context failed 00:04:54.427 [2024-04-15 20:31:37.712319] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:04:54.427 [2024-04-15 20:31:37.712365] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:04:54.427 [2024-04-15 20:31:37.712481] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:04:54.427 [2024-04-15 20:31:37.712579] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:04:54.427 [2024-04-15 20:31:37.712612] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:04:54.427 passed 00:04:54.427 Test: test_spdk_nvme_detach ...passed 00:04:54.427 Test: test_nvme_completion_poll_cb ...passed 00:04:54.427 Test: test_nvme_user_copy_cmd_complete ...[2024-04-15 20:31:37.822123] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:04:54.427 [2024-04-15 20:31:37.822385] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:04:54.427 passed 00:04:54.427 Test: test_nvme_allocate_request_null ...passed 
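Note: the parse_next_key failures earlier in this suite ("Key without ':' or '=' separator", "Key length 32 greater than maximum allowed 31", "Key without value") are the negative paths of spdk_nvme_transport_id_parse(), which consumes whitespace-separated key:value pairs. A minimal sketch of the happy path, assuming the public spdk/nvme.h API; the string and the helper name here are illustrative, not taken from this run:

    #include <stdio.h>
    #include "spdk/nvme.h"

    static int
    parse_trid_example(void)
    {
        struct spdk_nvme_transport_id trid = {};

        /* Keys and values are separated by ':' or '='; keys longer than
         * 31 characters are rejected, as the unit test above verifies. */
        if (spdk_nvme_transport_id_parse(&trid, "trtype:PCIe traddr:0000:04:00.0") != 0) {
            fprintf(stderr, "failed to parse transport ID\n");
            return -1;
        }
        return 0;
    }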
00:04:54.427 Test: test_nvme_allocate_request ...passed 00:04:54.427 Test: test_nvme_free_request ...passed 00:04:54.427 Test: test_nvme_allocate_request_user_copy ...passed 00:04:54.427 Test: test_nvme_robust_mutex_init_shared ...passed 00:04:54.427 Test: test_nvme_request_check_timeout ...passed 00:04:54.427 Test: test_nvme_wait_for_completion ...passed 00:04:54.427 Test: test_spdk_nvme_parse_func ...passed 00:04:54.427 Test: test_spdk_nvme_detach_async ...passed 00:04:54.427 Test: test_nvme_parse_addr ...passed 00:04:54.427 00:04:54.427 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.427 suites 1 1 n/a 0 0 00:04:54.427 tests 25 25 25 0 0 00:04:54.427 asserts 326 326 326 0 n/a 00:04:54.427 00:04:54.427 Elapsed time = 0.000 seconds 00:04:54.427 [2024-04-15 20:31:37.823683] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1577:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:04:54.427 20:31:37 -- unit/unittest.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:04:54.427 00:04:54.427 00:04:54.427 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.427 http://cunit.sourceforge.net/ 00:04:54.427 00:04:54.427 00:04:54.427 Suite: nvme_ctrlr 00:04:54.427 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-04-15 20:31:37.851767] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:54.427 passed 00:04:54.427 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-04-15 20:31:37.853385] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:54.427 passed 00:04:54.427 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-04-15 20:31:37.854570] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:54.427 passed 00:04:54.427 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-04-15 20:31:37.855741] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:54.427 passed 00:04:54.427 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-04-15 20:31:37.856898] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:54.427 [2024-04-15 20:31:37.858017] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-04-15 20:31:37.859161] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-04-15 20:31:37.860275] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:04:54.427 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-04-15 20:31:37.862492] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:54.427 [2024-04-15 20:31:37.864695] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-04-15 20:31:37.865835] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:04:54.427 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-04-15 20:31:37.868081] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:54.427 [2024-04-15 20:31:37.869220] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-04-15 20:31:37.871448] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:04:54.427 Test: test_nvme_ctrlr_init_delay ...[2024-04-15 20:31:37.873717] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:54.427 passed 00:04:54.427 Test: test_alloc_io_qpair_rr_1 ...passed 00:04:54.427 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:04:54.427 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:04:54.427 Test: test_alloc_io_qpair_wrr_1 ...passed 00:04:54.427 Test: test_alloc_io_qpair_wrr_2 ...passed 00:04:54.427 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-04-15 20:31:37.874913] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:54.427 [2024-04-15 20:31:37.875019] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5304:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:04:54.427 [2024-04-15 20:31:37.875151] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:04:54.427 [2024-04-15 20:31:37.875200] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:04:54.427 [2024-04-15 20:31:37.875244] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:04:54.427 [2024-04-15 20:31:37.875415] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:54.427 [2024-04-15 20:31:37.875510] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:54.427 [2024-04-15 20:31:37.875579] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5304:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:04:54.427 [2024-04-15 20:31:37.875742] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4832:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:04:54.427 [2024-04-15 20:31:37.875857] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4869:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 
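Note: the spdk_nvme_ctrlr_update_firmware errors just above cover an image size that is not dword-aligned ("invalid size!"), a failed image download, and a failed commit. A sketch of a valid call, assuming the public API; the helper name and its arguments are illustrative:

    #include <errno.h>
    #include "spdk/nvme.h"

    static int
    fw_update_example(struct spdk_nvme_ctrlr *ctrlr, void *image,
                      uint32_t size, int slot)
    {
        struct spdk_nvme_status status;

        /* size must be non-zero and a multiple of 4, or the call fails
         * with "spdk_nvme_ctrlr_update_firmware invalid size!" as above. */
        if (size == 0 || (size % 4) != 0) {
            return -EINVAL;
        }
        return spdk_nvme_ctrlr_update_firmware(ctrlr, image, size, slot,
                SPDK_NVME_FW_COMMIT_REPLACE_AND_ENABLE_IMG, &status);
    }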
00:04:54.427 passed 00:04:54.427 Test: test_nvme_ctrlr_fail ...passed 00:04:54.427 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:04:54.427 Test: test_nvme_ctrlr_set_supported_features ...passed 00:04:54.427 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:04:54.427 Test: test_nvme_ctrlr_test_active_ns ...[2024-04-15 20:31:37.875904] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4909:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 00:04:54.427 [2024-04-15 20:31:37.875937] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4869:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:04:54.428 [2024-04-15 20:31:37.876000] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:04:54.428 [2024-04-15 20:31:37.876190] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:54.688 passed 00:04:54.688 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:04:54.688 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:04:54.688 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:04:54.688 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-04-15 20:31:37.991421] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:54.688 passed 00:04:54.688 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-04-15 20:31:37.998084] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:54.688 passed 00:04:54.688 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-04-15 20:31:37.999215] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:54.688 [2024-04-15 20:31:37.999264] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2869:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:04:54.688 passed 00:04:54.688 Test: test_alloc_io_qpair_fail ...passed 00:04:54.688 Test: test_nvme_ctrlr_add_remove_process ...passed 00:04:54.688 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:04:54.688 Test: test_nvme_ctrlr_set_state ...passed 00:04:54.688 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-04-15 20:31:38.000362] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:54.688 [2024-04-15 20:31:38.000429] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 497:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:04:54.688 [2024-04-15 20:31:38.000509] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1464:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
00:04:54.688 [2024-04-15 20:31:38.000535] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:54.688 passed 00:04:54.688 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-04-15 20:31:38.013825] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:54.688 passed 00:04:54.688 Test: test_nvme_ctrlr_ns_mgmt ...[2024-04-15 20:31:38.041497] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:54.688 passed 00:04:54.688 Test: test_nvme_ctrlr_reset ...passed 00:04:54.688 Test: test_nvme_ctrlr_aer_callback ...[2024-04-15 20:31:38.042805] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:54.688 [2024-04-15 20:31:38.042986] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:54.688 passed 00:04:54.688 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-04-15 20:31:38.044236] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:54.688 passed 00:04:54.688 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:04:54.688 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:04:54.688 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-04-15 20:31:38.045631] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:54.688 passed 00:04:54.688 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:04:54.688 Test: test_nvme_ctrlr_ana_resize ...[2024-04-15 20:31:38.046850] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:54.688 passed 00:04:54.688 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:04:54.688 Test: test_nvme_transport_ctrlr_ready ...passed 00:04:54.688 Test: test_nvme_ctrlr_disable ...[2024-04-15 20:31:38.048139] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4015:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:04:54.688 [2024-04-15 20:31:38.048181] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:04:54.688 [2024-04-15 20:31:38.048212] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:54.688 passed 00:04:54.688 00:04:54.688 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.688 suites 1 1 n/a 0 0 00:04:54.688 tests 43 43 43 0 0 00:04:54.688 asserts 10418 10418 10418 0 n/a 00:04:54.688 00:04:54.688 Elapsed time = 0.160 seconds 00:04:54.688 20:31:38 -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:04:54.688 00:04:54.688 00:04:54.688 CUnit - A unit testing framework for C - Version 2.1-3 
00:04:54.688 http://cunit.sourceforge.net/ 00:04:54.688 00:04:54.688 00:04:54.688 Suite: nvme_ctrlr_cmd 00:04:54.688 Test: test_get_log_pages ...passed 00:04:54.688 Test: test_set_feature_cmd ...passed 00:04:54.688 Test: test_set_feature_ns_cmd ...passed 00:04:54.688 Test: test_get_feature_cmd ...passed 00:04:54.688 Test: test_get_feature_ns_cmd ...passed 00:04:54.688 Test: test_abort_cmd ...passed 00:04:54.688 Test: test_set_host_id_cmds ...passed 00:04:54.688 Test: test_io_cmd_raw_no_payload_build ...passed 00:04:54.688 Test: test_io_raw_cmd ...passed[2024-04-15 20:31:38.091553] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 502:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:04:54.688 00:04:54.688 Test: test_io_raw_cmd_with_md ...passed 00:04:54.688 Test: test_namespace_attach ...passed 00:04:54.688 Test: test_namespace_detach ...passed 00:04:54.688 Test: test_namespace_create ...passed 00:04:54.688 Test: test_namespace_delete ...passed 00:04:54.688 Test: test_doorbell_buffer_config ...passed 00:04:54.688 Test: test_format_nvme ...passed 00:04:54.688 Test: test_fw_commit ...passed 00:04:54.688 Test: test_fw_image_download ...passed 00:04:54.688 Test: test_sanitize ...passed 00:04:54.688 Test: test_directive ...passed 00:04:54.688 Test: test_nvme_request_add_abort ...passed 00:04:54.688 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:04:54.688 Test: test_nvme_ctrlr_cmd_identify ...passed 00:04:54.688 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:04:54.688 00:04:54.688 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.688 suites 1 1 n/a 0 0 00:04:54.688 tests 24 24 24 0 0 00:04:54.688 asserts 198 198 198 0 n/a 00:04:54.688 00:04:54.688 Elapsed time = 0.000 seconds 00:04:54.689 20:31:38 -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:04:54.689 00:04:54.689 00:04:54.689 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.689 http://cunit.sourceforge.net/ 00:04:54.689 00:04:54.689 00:04:54.689 Suite: nvme_ctrlr_cmd 00:04:54.689 Test: test_geometry_cmd ...passed 00:04:54.689 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:04:54.689 00:04:54.689 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.689 suites 1 1 n/a 0 0 00:04:54.689 tests 2 2 2 0 0 00:04:54.689 asserts 7 7 7 0 n/a 00:04:54.689 00:04:54.689 Elapsed time = 0.000 seconds 00:04:54.689 20:31:38 -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:04:54.689 00:04:54.689 00:04:54.689 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.689 http://cunit.sourceforge.net/ 00:04:54.689 00:04:54.689 00:04:54.689 Suite: nvme 00:04:54.689 Test: test_nvme_ns_construct ...passed 00:04:54.689 Test: test_nvme_ns_uuid ...passed 00:04:54.689 Test: test_nvme_ns_csi ...passed 00:04:54.689 Test: test_nvme_ns_data ...passed 00:04:54.689 Test: test_nvme_ns_set_identify_data ...passed 00:04:54.689 Test: test_spdk_nvme_ns_get_values ...passed 00:04:54.689 Test: test_spdk_nvme_ns_is_active ...passed 00:04:54.689 Test: spdk_nvme_ns_supports ...passed 00:04:54.689 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:04:54.689 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:04:54.689 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:04:54.689 Test: test_nvme_ns_find_id_desc ...passed 00:04:54.689 00:04:54.689 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.689 suites 1 1 n/a 0 0 00:04:54.689 tests 
12 12 12 0 0 00:04:54.689 asserts 83 83 83 0 n/a 00:04:54.689 00:04:54.689 Elapsed time = 0.000 seconds 00:04:54.689 20:31:38 -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:04:54.689 00:04:54.689 00:04:54.689 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.689 http://cunit.sourceforge.net/ 00:04:54.689 00:04:54.689 00:04:54.689 Suite: nvme_ns_cmd 00:04:54.689 Test: split_test ...passed 00:04:54.689 Test: split_test2 ...passed 00:04:54.689 Test: split_test3 ...passed 00:04:54.689 Test: split_test4 ...passed 00:04:54.689 Test: test_nvme_ns_cmd_flush ...passed 00:04:54.689 Test: test_nvme_ns_cmd_dataset_management ...passed 00:04:54.689 Test: test_nvme_ns_cmd_copy ...passed 00:04:54.689 Test: test_io_flags ...passed 00:04:54.689 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:04:54.689 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:04:54.689 Test: test_nvme_ns_cmd_reservation_register ...passed 00:04:54.689 Test: test_nvme_ns_cmd_reservation_release ...passed 00:04:54.689 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:04:54.689 Test: test_nvme_ns_cmd_reservation_report ...passed 00:04:54.689 Test: test_cmd_child_request ...[2024-04-15 20:31:38.164492] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:04:54.689 passed 00:04:54.689 Test: test_nvme_ns_cmd_readv ...passed 00:04:54.689 Test: test_nvme_ns_cmd_read_with_md ...passed 00:04:54.689 Test: test_nvme_ns_cmd_writev ...passed 00:04:54.689 Test: test_nvme_ns_cmd_write_with_md ...[2024-04-15 20:31:38.165167] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 287:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:04:54.689 passed 00:04:54.689 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:04:54.689 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:04:54.689 Test: test_nvme_ns_cmd_comparev ...passed 00:04:54.689 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:04:54.689 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:04:54.689 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:04:54.689 Test: test_nvme_ns_cmd_setup_request ...passed 00:04:54.689 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:04:54.689 Test: test_spdk_nvme_ns_cmd_writev_ext ...passed 00:04:54.689 Test: test_spdk_nvme_ns_cmd_readv_ext ...passed 00:04:54.689 Test: test_nvme_ns_cmd_verify ...passed 00:04:54.689 Test: test_nvme_ns_cmd_io_mgmt_send ...[2024-04-15 20:31:38.166175] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:04:54.689 [2024-04-15 20:31:38.166242] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:04:54.689 passed 00:04:54.689 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:04:54.689 00:04:54.689 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.689 suites 1 1 n/a 0 0 00:04:54.689 tests 32 32 32 0 0 00:04:54.689 asserts 550 550 550 0 n/a 00:04:54.689 00:04:54.689 Elapsed time = 0.000 seconds 00:04:54.689 20:31:38 -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:04:54.950 00:04:54.950 00:04:54.950 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.950 http://cunit.sourceforge.net/ 00:04:54.950 00:04:54.950 00:04:54.950 Suite: nvme_ns_cmd 00:04:54.950 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 
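Note: the io_flags rejections in the nvme_ns_cmd suite above (0xfffc, 0xffff000f) come from the _is_io_flags_valid() check, which accepts only defined SPDK_NVME_IO_FLAGS_* bits. An illustrative submission with a valid flag, assuming the public read API; ns, qpair, buffer, and callback setup are assumed to exist elsewhere:

    #include "spdk/nvme.h"

    static int
    read_with_flags(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                    void *buf, uint64_t lba, uint32_t lba_count,
                    spdk_nvme_cmd_cb cb_fn, void *cb_arg)
    {
        /* Only the io_flags argument is the point here: undefined low bits
         * such as those in 0xfffc are rejected before submission. */
        return spdk_nvme_ns_cmd_read(ns, qpair, buf, lba, lba_count,
                                     cb_fn, cb_arg,
                                     SPDK_NVME_IO_FLAGS_FORCE_UNIT_ACCESS);
    }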
00:04:54.950 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:04:54.950 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:04:54.950 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:04:54.950 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:04:54.950 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:04:54.950 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:04:54.950 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:04:54.950 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:04:54.950 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:04:54.950 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:04:54.950 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:04:54.950 00:04:54.950 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.950 suites 1 1 n/a 0 0 00:04:54.950 tests 12 12 12 0 0 00:04:54.950 asserts 123 123 123 0 n/a 00:04:54.950 00:04:54.950 Elapsed time = 0.010 seconds 00:04:54.950 20:31:38 -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:04:54.950 00:04:54.950 00:04:54.950 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.950 http://cunit.sourceforge.net/ 00:04:54.950 00:04:54.950 00:04:54.950 Suite: nvme_qpair 00:04:54.950 Test: test3 ...passed 00:04:54.950 Test: test_ctrlr_failed ...passed 00:04:54.950 Test: struct_packing ...passed 00:04:54.950 Test: test_nvme_qpair_process_completions ...[2024-04-15 20:31:38.225954] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:04:54.950 passed 00:04:54.950 Test: test_nvme_completion_is_retry ...passed 00:04:54.950 Test: test_get_status_string ...passed 00:04:54.950 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:04:54.950 Test: test_nvme_qpair_submit_request ...passed 00:04:54.950 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:04:54.950 Test: test_nvme_qpair_manual_complete_request ...passed 00:04:54.950 Test: test_nvme_qpair_init_deinit ...passed 00:04:54.950 Test: test_nvme_get_sgl_print_info ...passed 00:04:54.950 00:04:54.950 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.950 suites 1 1 n/a 0 0 00:04:54.950 tests 12 12 12 0 0 00:04:54.950 asserts 154 154 154 0 n/a 00:04:54.950 00:04:54.950 Elapsed time = 0.000 seconds 00:04:54.950 [2024-04-15 20:31:38.226684] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:04:54.950 [2024-04-15 20:31:38.226783] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:04:54.950 [2024-04-15 20:31:38.226890] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:04:54.950 [2024-04-15 20:31:38.227251] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:04:54.950 20:31:38 -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:04:54.950 00:04:54.950 00:04:54.950 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.950 http://cunit.sourceforge.net/ 00:04:54.950 00:04:54.950 00:04:54.950 Suite: nvme_pcie 00:04:54.950 Test: test_prp_list_append 
...passed 00:04:54.950 Test: test_nvme_pcie_hotplug_monitor ...passed 00:04:54.950 Test: test_shadow_doorbell_update ...passed 00:04:54.950 Test: test_build_contig_hw_sgl_request ...passed 00:04:54.950 Test: test_nvme_pcie_qpair_build_metadata ...[2024-04-15 20:31:38.263613] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:04:54.950 [2024-04-15 20:31:38.264204] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:04:54.950 [2024-04-15 20:31:38.264252] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1221:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:04:54.950 [2024-04-15 20:31:38.264548] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:04:54.950 [2024-04-15 20:31:38.264664] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:04:54.950 passed 00:04:54.950 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:04:54.950 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:04:54.950 Test: test_nvme_pcie_qpair_build_contig_request ...passed 00:04:54.950 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:04:54.950 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:04:54.950 Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed 00:04:54.950 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...[2024-04-15 20:31:38.264928] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:04:54.950 [2024-04-15 20:31:38.265038] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
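Note: the nvme_pcie PRP errors above encode the PRP list rules directly: every address must be dword-aligned (0x100001 fails), and every entry after the first must also be page-aligned (0x900800 is not on a 4 KiB boundary). A standalone check expressing the same rules, assuming 4 KiB pages; this is a sketch, not the library's internal implementation:

    #include <stdbool.h>
    #include <stdint.h>

    static bool
    prp_entry_ok(uint64_t addr, bool is_first_entry)
    {
        if ((addr & 0x3) != 0) {
            return false;   /* not dword aligned, e.g. 0x100001 */
        }
        if (!is_first_entry && (addr & 0xFFF) != 0) {
            return false;   /* PRP2..n must be 4 KiB aligned, e.g. 0x900800 */
        }
        return true;
    }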
00:04:54.950 passed 00:04:54.950 Test: test_nvme_pcie_ctrlr_config_pmr ...passed 00:04:54.950 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed 00:04:54.950 00:04:54.950 [2024-04-15 20:31:38.265141] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:04:54.950 [2024-04-15 20:31:38.265202] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:04:54.950 [2024-04-15 20:31:38.265251] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:04:54.950 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.950 suites 1 1 n/a 0 0 00:04:54.950 tests 14 14 14 0 0 00:04:54.950 asserts 235 235 235 0 n/a 00:04:54.950 00:04:54.950 Elapsed time = 0.000 seconds 00:04:54.950 20:31:38 -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:04:54.950 00:04:54.950 00:04:54.950 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.950 http://cunit.sourceforge.net/ 00:04:54.950 00:04:54.950 00:04:54.950 Suite: nvme_ns_cmd 00:04:54.950 Test: nvme_poll_group_create_test ...passed 00:04:54.950 Test: nvme_poll_group_add_remove_test ...passed 00:04:54.950 Test: nvme_poll_group_process_completions ...passed 00:04:54.950 Test: nvme_poll_group_destroy_test ...passed 00:04:54.950 Test: nvme_poll_group_get_free_stats ...passed 00:04:54.950 00:04:54.950 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.950 suites 1 1 n/a 0 0 00:04:54.950 tests 5 5 5 0 0 00:04:54.950 asserts 75 75 75 0 n/a 00:04:54.950 00:04:54.951 Elapsed time = 0.000 seconds 00:04:54.951 20:31:38 -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:04:54.951 00:04:54.951 00:04:54.951 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.951 http://cunit.sourceforge.net/ 00:04:54.951 00:04:54.951 00:04:54.951 Suite: nvme_quirks 00:04:54.951 Test: test_nvme_quirks_striping ...passed 00:04:54.951 00:04:54.951 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.951 suites 1 1 n/a 0 0 00:04:54.951 tests 1 1 1 0 0 00:04:54.951 asserts 5 5 5 0 n/a 00:04:54.951 00:04:54.951 Elapsed time = 0.000 seconds 00:04:54.951 20:31:38 -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:04:54.951 00:04:54.951 00:04:54.951 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.951 http://cunit.sourceforge.net/ 00:04:54.951 00:04:54.951 00:04:54.951 Suite: nvme_tcp 00:04:54.951 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:04:54.951 Test: test_nvme_tcp_build_iovs ...passed 00:04:54.951 Test: test_nvme_tcp_build_sgl_request ...[2024-04-15 20:31:38.357254] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7fff363b19a0, and the iovcnt=16, remaining_size=28672 00:04:54.951 passed 00:04:54.951 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:04:54.951 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:04:54.951 Test: test_nvme_tcp_req_complete_safe ...passed 00:04:54.951 Test: test_nvme_tcp_req_get ...passed 00:04:54.951 Test: test_nvme_tcp_req_init ...passed 00:04:54.951 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:04:54.951 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:04:54.951 Test: test_nvme_tcp_qpair_set_recv_state ...passed 00:04:54.951 Test: 
test_nvme_tcp_alloc_reqs ...passed 00:04:54.951 Test: test_nvme_tcp_qpair_send_h2c_term_req ...passed 00:04:54.951 Test: test_nvme_tcp_pdu_ch_handle ...[2024-04-15 20:31:38.357946] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff363b36b0 is same with the state(6) to be set 00:04:54.951 [2024-04-15 20:31:38.358285] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff363b2850 is same with the state(5) to be set 00:04:54.951 [2024-04-15 20:31:38.358358] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1108:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7fff363b3380 00:04:54.951 [2024-04-15 20:31:38.358421] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1167:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:04:54.951 [2024-04-15 20:31:38.358515] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff363b2d10 is same with the state(5) to be set 00:04:54.951 [2024-04-15 20:31:38.358580] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1118:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:04:54.951 passed 00:04:54.951 Test: test_nvme_tcp_qpair_connect_sock ...[2024-04-15 20:31:38.358934] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff363b2d10 is same with the state(5) to be set 00:04:54.951 [2024-04-15 20:31:38.359002] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:04:54.951 [2024-04-15 20:31:38.359036] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff363b2d10 is same with the state(5) to be set 00:04:54.951 [2024-04-15 20:31:38.359080] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff363b2d10 is same with the state(5) to be set 00:04:54.951 [2024-04-15 20:31:38.359127] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff363b2d10 is same with the state(5) to be set 00:04:54.951 [2024-04-15 20:31:38.359212] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff363b2d10 is same with the state(5) to be set 00:04:54.951 [2024-04-15 20:31:38.359248] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff363b2d10 is same with the state(5) to be set 00:04:54.951 [2024-04-15 20:31:38.359301] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff363b2d10 is same with the state(5) to be set 00:04:54.951 [2024-04-15 20:31:38.359442] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:04:54.951 [2024-04-15 20:31:38.359493] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:04:54.951 passed 00:04:54.951 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:04:54.951 Test: test_nvme_tcp_c2h_payload_handle ...[2024-04-15 20:31:38.359815] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:04:54.951 [2024-04-15 20:31:38.359944] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7fff363b2ec0): PDU Sequence Error 00:04:54.951 passed 00:04:54.951 Test: test_nvme_tcp_icresp_handle ...[2024-04-15 20:31:38.360094] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1508:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:04:54.951 [2024-04-15 20:31:38.360135] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1515:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:04:54.951 [2024-04-15 20:31:38.360177] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff363b2850 is same with the state(5) to be set 00:04:54.951 passed 00:04:54.951 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:04:54.951 Test: test_nvme_tcp_capsule_resp_hdr_handle ...[2024-04-15 20:31:38.360233] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1524:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:04:54.951 [2024-04-15 20:31:38.360277] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff363b2850 is same with the state(5) to be set 00:04:54.951 [2024-04-15 20:31:38.360334] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff363b2850 is same with the state(0) to be set 00:04:54.951 [2024-04-15 20:31:38.360402] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7fff363b3380): PDU Sequence Error 00:04:54.951 [2024-04-15 20:31:38.360503] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1585:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7fff363b1b40 00:04:54.951 passed 00:04:54.951 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:04:54.951 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-04-15 20:31:38.360663] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 353:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7fff363b11c0, errno=0, rc=0 00:04:54.951 [2024-04-15 20:31:38.360749] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff363b11c0 is same with the state(5) to be set 00:04:54.951 passed 00:04:54.951 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-04-15 20:31:38.360815] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff363b11c0 is same with the state(5) to be set 00:04:54.951 [2024-04-15 20:31:38.360878] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7fff363b11c0 (0): Success 00:04:54.951 [2024-04-15 20:31:38.360935] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7fff363b11c0 (0): Success 00:04:54.951 passed 00:04:54.951 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:04:54.951 Test: test_nvme_tcp_poll_group_get_stats ...passed 00:04:54.951 Test: test_nvme_tcp_ctrlr_construct ...[2024-04-15 20:31:38.410301] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. 
Minimum queue size is 2. 00:04:54.951 [2024-04-15 20:31:38.410395] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:04:54.951 [2024-04-15 20:31:38.410510] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:04:54.951 [2024-04-15 20:31:38.410527] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:04:54.951 passed 00:04:54.951 Test: test_nvme_tcp_qpair_submit_request ...passed 00:04:54.951 00:04:54.951 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.951 suites 1 1 n/a 0 0 00:04:54.951 tests 27 27 27 0 0 00:04:54.951 asserts 624 624 624 0 n/a 00:04:54.951 00:04:54.951 Elapsed time = 0.050 seconds 00:04:54.951 [2024-04-15 20:31:38.410870] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:04:54.951 [2024-04-15 20:31:38.410910] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:04:54.951 [2024-04-15 20:31:38.410976] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:04:54.951 [2024-04-15 20:31:38.411005] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:04:54.951 [2024-04-15 20:31:38.411058] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000001540 with addr=192.168.1.78, port=23 00:04:54.951 [2024-04-15 20:31:38.411090] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:04:54.951 [2024-04-15 20:31:38.411171] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x613000001a80, and the iovcnt=1, remaining_size=1024 00:04:54.951 [2024-04-15 20:31:38.411196] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 961:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:04:54.951 20:31:38 -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:04:55.217 00:04:55.217 00:04:55.217 CUnit - A unit testing framework for C - Version 2.1-3 00:04:55.217 http://cunit.sourceforge.net/ 00:04:55.217 00:04:55.217 00:04:55.217 Suite: nvme_transport 00:04:55.217 Test: test_nvme_get_transport ...passed 00:04:55.217 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:04:55.217 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:04:55.217 Test: test_nvme_transport_poll_group_add_remove ...passed 00:04:55.217 Test: test_ctrlr_get_memory_domains ...passed 00:04:55.217 00:04:55.217 Run Summary: Type Total Ran Passed Failed Inactive 00:04:55.217 suites 1 1 n/a 0 0 00:04:55.217 tests 5 5 5 0 0 00:04:55.217 asserts 28 28 28 0 n/a 00:04:55.217 00:04:55.217 Elapsed time = 0.000 seconds 00:04:55.217 20:31:38 -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:04:55.217 00:04:55.217 00:04:55.217 CUnit - A unit testing framework for C - Version 2.1-3 00:04:55.217 http://cunit.sourceforge.net/ 00:04:55.217 00:04:55.217 00:04:55.217 Suite: nvme_io_msg 00:04:55.217 Test: test_nvme_io_msg_send 
...passed 00:04:55.217 Test: test_nvme_io_msg_process ...passed 00:04:55.217 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:04:55.217 00:04:55.217 Run Summary: Type Total Ran Passed Failed Inactive 00:04:55.217 suites 1 1 n/a 0 0 00:04:55.217 tests 3 3 3 0 0 00:04:55.217 asserts 56 56 56 0 n/a 00:04:55.217 00:04:55.217 Elapsed time = 0.000 seconds 00:04:55.217 20:31:38 -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:04:55.217 00:04:55.217 00:04:55.217 CUnit - A unit testing framework for C - Version 2.1-3 00:04:55.217 http://cunit.sourceforge.net/ 00:04:55.217 00:04:55.217 00:04:55.217 Suite: nvme_pcie_common 00:04:55.217 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-04-15 20:31:38.516713] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:04:55.217 passed 00:04:55.217 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:04:55.217 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:04:55.217 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-04-15 20:31:38.517269] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 503:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:04:55.217 passed 00:04:55.217 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...passed 00:04:55.217 Test: test_nvme_pcie_poll_group_get_stats ...passed 00:04:55.217 00:04:55.217 [2024-04-15 20:31:38.517554] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 456:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:04:55.217 [2024-04-15 20:31:38.517601] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 550:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:04:55.217 [2024-04-15 20:31:38.517987] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:04:55.217 [2024-04-15 20:31:38.518032] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:04:55.217 Run Summary: Type Total Ran Passed Failed Inactive 00:04:55.217 suites 1 1 n/a 0 0 00:04:55.217 tests 6 6 6 0 0 00:04:55.217 asserts 148 148 148 0 n/a 00:04:55.217 00:04:55.217 Elapsed time = 0.000 seconds 00:04:55.217 20:31:38 -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:04:55.217 00:04:55.217 00:04:55.217 CUnit - A unit testing framework for C - Version 2.1-3 00:04:55.217 http://cunit.sourceforge.net/ 00:04:55.217 00:04:55.217 00:04:55.217 Suite: nvme_fabric 00:04:55.217 Test: test_nvme_fabric_prop_set_cmd ...passed 00:04:55.217 Test: test_nvme_fabric_prop_get_cmd ...passed 00:04:55.217 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:04:55.217 Test: test_nvme_fabric_discover_probe ...passed 00:04:55.217 Test: test_nvme_fabric_qpair_connect ...passed 00:04:55.217 00:04:55.217 Run Summary: Type Total Ran Passed Failed Inactive 00:04:55.217 suites 1 1 n/a 0 0 00:04:55.217 tests 5 5 5 0 0 00:04:55.217 asserts 60 60 60 0 n/a 00:04:55.217 00:04:55.217 Elapsed time = 0.000 seconds 00:04:55.217 [2024-04-15 20:31:38.547910] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:04:55.217 20:31:38 -- 
unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:04:55.217 00:04:55.217 00:04:55.217 CUnit - A unit testing framework for C - Version 2.1-3 00:04:55.217 http://cunit.sourceforge.net/ 00:04:55.217 00:04:55.217 00:04:55.217 Suite: nvme_opal 00:04:55.217 Test: test_opal_nvme_security_recv_send_done ...passed 00:04:55.217 Test: test_opal_add_short_atom_header ...passed 00:04:55.217 00:04:55.217 Run Summary: Type Total Ran Passed Failed Inactive 00:04:55.217 suites 1 1 n/a 0 0 00:04:55.217 tests 2 2 2 0 0 00:04:55.217 asserts 22 22 22 0 n/a 00:04:55.217 00:04:55.217 Elapsed time = 0.000 seconds 00:04:55.217 [2024-04-15 20:31:38.577868] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:04:55.217 ************************************ 00:04:55.217 END TEST unittest_nvme 00:04:55.217 ************************************ 00:04:55.217 00:04:55.217 real 0m0.903s 00:04:55.217 user 0m0.368s 00:04:55.217 sys 0m0.395s 00:04:55.217 20:31:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.217 20:31:38 -- common/autotest_common.sh@10 -- # set +x 00:04:55.217 20:31:38 -- unit/unittest.sh@247 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:04:55.217 20:31:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:55.217 20:31:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:55.217 20:31:38 -- common/autotest_common.sh@10 -- # set +x 00:04:55.217 ************************************ 00:04:55.217 START TEST unittest_log 00:04:55.217 ************************************ 00:04:55.217 20:31:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:04:55.217 00:04:55.217 00:04:55.217 CUnit - A unit testing framework for C - Version 2.1-3 00:04:55.217 http://cunit.sourceforge.net/ 00:04:55.217 00:04:55.217 00:04:55.217 Suite: log 00:04:55.217 Test: log_test ...[2024-04-15 20:31:38.676453] log_ut.c: 54:log_test: *WARNING*: log warning unit test 00:04:55.217 passed 00:04:55.217 Test: deprecation ...[2024-04-15 20:31:38.676891] log_ut.c: 55:log_test: *DEBUG*: log test 00:04:55.217 log dump test: 00:04:55.217 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:04:55.217 spdk dump test: 00:04:55.217 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:04:55.217 spdk dump test: 00:04:55.217 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:04:55.217 00000010 65 20 63 68 61 72 73 e chars 00:04:56.596 passed 00:04:56.596 00:04:56.596 Run Summary: Type Total Ran Passed Failed Inactive 00:04:56.596 suites 1 1 n/a 0 0 00:04:56.596 tests 2 2 2 0 0 00:04:56.596 asserts 73 73 73 0 n/a 00:04:56.596 00:04:56.596 Elapsed time = 0.000 seconds 00:04:56.596 ************************************ 00:04:56.596 END TEST unittest_log 00:04:56.596 ************************************ 00:04:56.596 00:04:56.596 real 0m1.038s 00:04:56.596 user 0m0.018s 00:04:56.596 sys 0m0.020s 00:04:56.596 20:31:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.596 20:31:39 -- common/autotest_common.sh@10 -- # set +x 00:04:56.596 20:31:39 -- unit/unittest.sh@248 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:04:56.596 20:31:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:56.596 20:31:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:56.596 20:31:39 -- common/autotest_common.sh@10 -- # set +x 
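The log_ut dump rows above follow the classic offset / hex-bytes / ASCII layout (73 70 64 6b 20 64 75 6d 70 decodes to "spdk dump"). A minimal, self-contained C sketch of that layout — an illustration only, not SPDK's own dump routine — could look like:

    #include <ctype.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Print buf as 16-byte rows: zero-padded hex offset, hex bytes, printable ASCII. */
    static void hex_dump(const void *buf, size_t len)
    {
        const uint8_t *p = buf;
        for (size_t off = 0; off < len; off += 16) {
            size_t row = (len - off < 16) ? (len - off) : 16;
            printf("%08zx ", off);
            for (size_t i = 0; i < 16; i++) {
                if (i < row)
                    printf("%02x ", p[off + i]);
                else
                    printf("   ");   /* pad short rows so the ASCII column stays aligned */
            }
            for (size_t i = 0; i < row; i++)
                putchar(isprint(p[off + i]) ? p[off + i] : '.');
            putchar('\n');
        }
    }

    int main(void)
    {
        const char msg[] = "spdk dump 16 more chars";
        hex_dump(msg, strlen(msg));   /* mirrors the two-row dump in the log above */
        return 0;
    }

With that 23-byte string, the first row carries 16 bytes ("spdk dump 16 mor") and the second row at offset 00000010 carries the remaining 7 ("e chars"), matching the shape of the dump printed by the test.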
00:04:56.596 ************************************ 00:04:56.596 START TEST unittest_lvol 00:04:56.596 ************************************ 00:04:56.596 20:31:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:04:56.596 00:04:56.596 00:04:56.596 CUnit - A unit testing framework for C - Version 2.1-3 00:04:56.596 http://cunit.sourceforge.net/ 00:04:56.596 00:04:56.596 00:04:56.596 Suite: lvol 00:04:56.596 Test: lvs_init_unload_success ...[2024-04-15 20:31:39.779452] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:04:56.596 passed 00:04:56.596 Test: lvs_init_destroy_success ...passed 00:04:56.596 Test: lvs_init_opts_success ...passed 00:04:56.596 Test: lvs_unload_lvs_is_null_fail ...passed 00:04:56.596 Test: lvs_names ...passed 00:04:56.596 Test: lvol_create_destroy_success ...passed 00:04:56.596 Test: lvol_create_fail ...passed 00:04:56.596 Test: lvol_destroy_fail ...passed 00:04:56.596 Test: lvol_close ...[2024-04-15 20:31:39.780211] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:04:56.596 [2024-04-15 20:31:39.780370] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:04:56.596 [2024-04-15 20:31:39.780443] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:04:56.596 [2024-04-15 20:31:39.780483] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:04:56.596 [2024-04-15 20:31:39.780591] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:04:56.596 [2024-04-15 20:31:39.780922] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:04:56.596 [2024-04-15 20:31:39.781019] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:04:56.596 [2024-04-15 20:31:39.781184] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:04:56.596 [2024-04-15 20:31:39.781334] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:04:56.596 [2024-04-15 20:31:39.781369] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:04:56.596 passed 00:04:56.596 Test: lvol_resize ...passed 00:04:56.596 Test: lvol_set_read_only ...passed 00:04:56.596 Test: test_lvs_load ...[2024-04-15 20:31:39.781776] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:04:56.596 [2024-04-15 20:31:39.781810] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:04:56.596 passed 00:04:56.596 Test: lvols_load ...[2024-04-15 20:31:39.781936] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:04:56.596 [2024-04-15 20:31:39.782012] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:04:56.596 passed 00:04:56.596 Test: lvol_open ...passed 00:04:56.596 Test: lvol_snapshot ...passed 00:04:56.596 Test: lvol_snapshot_fail ...passed 00:04:56.596 Test: lvol_clone ...[2024-04-15 20:31:39.782466] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap 
already exists 00:04:56.596 passed 00:04:56.596 Test: lvol_clone_fail ...passed 00:04:56.596 Test: lvol_iter_clones ...[2024-04-15 20:31:39.782827] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:04:56.596 passed 00:04:56.596 Test: lvol_refcnt ...[2024-04-15 20:31:39.783190] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol c92b0e4a-ce55-4e55-8368-37c1b9ec4f36 because it is still open 00:04:56.596 passed 00:04:56.596 Test: lvol_names ...[2024-04-15 20:31:39.783331] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:04:56.596 [2024-04-15 20:31:39.783411] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:04:56.596 passed 00:04:56.596 Test: lvol_create_thin_provisioned ...[2024-04-15 20:31:39.783538] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:04:56.596 passed 00:04:56.596 Test: lvol_rename ...[2024-04-15 20:31:39.783862] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:04:56.596 [2024-04-15 20:31:39.783926] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:04:56.596 passed 00:04:56.596 Test: lvs_rename ...passed 00:04:56.596 Test: lvol_inflate ...[2024-04-15 20:31:39.784072] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:04:56.596 passed 00:04:56.596 Test: lvol_decouple_parent ...passed 00:04:56.596 Test: lvol_get_xattr ...[2024-04-15 20:31:39.784207] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:04:56.596 [2024-04-15 20:31:39.784358] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:04:56.596 passed 00:04:56.596 Test: lvol_esnap_reload ...passed 00:04:56.596 Test: lvol_esnap_create_bad_args ...[2024-04-15 20:31:39.784694] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:04:56.596 [2024-04-15 20:31:39.784721] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
00:04:56.596 [2024-04-15 20:31:39.784762] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:04:56.596 [2024-04-15 20:31:39.784864] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:04:56.596 passed 00:04:56.596 Test: lvol_esnap_create_delete ...[2024-04-15 20:31:39.784968] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:04:56.596 passed 00:04:56.596 Test: lvol_esnap_load_esnaps ...passed 00:04:56.596 Test: lvol_esnap_missing ...passed 00:04:56.596 Test: lvol_esnap_hotplug ...[2024-04-15 20:31:39.785192] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:04:56.596 [2024-04-15 20:31:39.785337] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:04:56.596 [2024-04-15 20:31:39.785375] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:04:56.596 00:04:56.596 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:04:56.596 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:04:56.597 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:04:56.597 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:04:56.597 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:04:56.597 [2024-04-15 20:31:39.785851] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 1da4522e-76d8-4d00-8c50-66e08a293f8d: failed to create esnap bs_dev: error -12 00:04:56.597 [2024-04-15 20:31:39.786068] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 2ada8c80-c1d5-44a6-8746-69436471adc0: failed to create esnap bs_dev: error -12 00:04:56.597 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:04:56.597 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:04:56.597 [2024-04-15 20:31:39.786185] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 81d20c1c-75fd-431d-b416-127fa255ad22: failed to create esnap bs_dev: error -12 00:04:56.597 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:04:56.597 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:04:56.597 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:04:56.597 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:04:56.597 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:04:56.597 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:04:56.597 passed 00:04:56.597 Test: lvol_get_by ...passed 00:04:56.597 00:04:56.597 Run Summary: Type Total Ran Passed Failed Inactive 00:04:56.597 suites 1 1 n/a 0 0 00:04:56.597 tests 34 34 34 0 0 00:04:56.597 asserts 1439 1439 1439 0 n/a 00:04:56.597 00:04:56.597 Elapsed time = 0.000 seconds 00:04:56.597 ************************************ 00:04:56.597 END TEST unittest_lvol 00:04:56.597 ************************************ 
00:04:56.597 00:04:56.597 real 0m0.046s 00:04:56.597 user 0m0.028s 00:04:56.597 sys 0m0.019s 00:04:56.597 20:31:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.597 20:31:39 -- common/autotest_common.sh@10 -- # set +x 00:04:56.597 20:31:39 -- unit/unittest.sh@249 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:04:56.597 20:31:39 -- unit/unittest.sh@250 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:04:56.597 20:31:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:56.597 20:31:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:56.597 20:31:39 -- common/autotest_common.sh@10 -- # set +x 00:04:56.597 ************************************ 00:04:56.597 START TEST unittest_nvme_rdma 00:04:56.597 ************************************ 00:04:56.597 20:31:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:04:56.597 00:04:56.597 00:04:56.597 CUnit - A unit testing framework for C - Version 2.1-3 00:04:56.597 http://cunit.sourceforge.net/ 00:04:56.597 00:04:56.597 00:04:56.597 Suite: nvme_rdma 00:04:56.597 Test: test_nvme_rdma_build_sgl_request ...[2024-04-15 20:31:39.884398] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:04:56.597 [2024-04-15 20:31:39.884756] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1628:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:04:56.597 [2024-04-15 20:31:39.884857] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1684:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:04:56.597 passed 00:04:56.597 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:04:56.597 Test: test_nvme_rdma_build_contig_request ...passed 00:04:56.597 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:04:56.597 Test: test_nvme_rdma_create_reqs ...passed 00:04:56.597 Test: test_nvme_rdma_create_rsps ...[2024-04-15 20:31:39.884956] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1565:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:04:56.597 [2024-04-15 20:31:39.885094] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1007:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:04:56.597 passed 00:04:56.597 Test: test_nvme_rdma_ctrlr_create_qpair ...passed 00:04:56.597 Test: test_nvme_rdma_poller_create ...[2024-04-15 20:31:39.885500] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 925:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:04:56.597 [2024-04-15 20:31:39.885626] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:04:56.597 [2024-04-15 20:31:39.885704] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
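Both the TCP suite earlier and the RDMA suite here reject queue sizes 0 and 1 with "Minimum queue size is 2": an NVMe queue ring keeps one slot unused to distinguish full from empty, so anything smaller than 2 entries leaves no usable slot. A hedged, hypothetical sketch of that guard (stand-in types, not SPDK's actual code):

    #include <stdio.h>
    #include <stdlib.h>

    #define NVME_MIN_QUEUE_SIZE 2u          /* NVMe queues need at least 2 entries */

    struct qpair { unsigned int size; };    /* stand-in for a transport qpair */

    /* Hypothetical guard mirroring the "Minimum queue size is 2" rejections above. */
    static struct qpair *create_qpair(unsigned int qsize)
    {
        if (qsize < NVME_MIN_QUEUE_SIZE) {
            fprintf(stderr, "Failed to create qpair with size %u. "
                            "Minimum queue size is 2.\n", qsize);
            return NULL;
        }
        struct qpair *q = malloc(sizeof(*q));
        if (q != NULL)
            q->size = qsize;
        return q;
    }

    int main(void)
    {
        if (create_qpair(0) == NULL && create_qpair(1) == NULL)
            puts("sizes 0 and 1 rejected, as the unit tests assert");
        free(create_qpair(2));              /* size 2 is the smallest accepted */
        return 0;
    }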
00:04:56.597 passed 00:04:56.597 Test: test_nvme_rdma_qpair_process_cm_event ...passed 00:04:56.597 Test: test_nvme_rdma_ctrlr_construct ...passed 00:04:56.597 Test: test_nvme_rdma_req_put_and_get ...passed 00:04:56.597 Test: test_nvme_rdma_req_init ...passed 00:04:56.597 Test: test_nvme_rdma_validate_cm_event ...[2024-04-15 20:31:39.885857] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 526:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:04:56.597 passed 00:04:56.597 Test: test_nvme_rdma_qpair_init ...passed 00:04:56.597 Test: test_nvme_rdma_qpair_submit_request ...passed 00:04:56.597 Test: test_nvme_rdma_memory_domain ...passed 00:04:56.597 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:04:56.597 Test: test_rdma_get_memory_translation ...[2024-04-15 20:31:39.886182] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:04:56.597 [2024-04-15 20:31:39.886221] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:04:56.597 [2024-04-15 20:31:39.886361] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 352:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:04:56.597 [2024-04-15 20:31:39.886442] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1444:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:04:56.597 passed 00:04:56.597 Test: test_get_rdma_qpair_from_wc ...passed 00:04:56.597 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:04:56.597 Test: test_nvme_rdma_poll_group_get_stats ...passed 00:04:56.597 Test: test_nvme_rdma_qpair_set_poller ...[2024-04-15 20:31:39.886508] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:04:56.597 [2024-04-15 20:31:39.886611] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:04:56.597 [2024-04-15 20:31:39.886668] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:04:56.597 [2024-04-15 20:31:39.886793] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:04:56.597 [2024-04-15 20:31:39.886850] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:04:56.597 [2024-04-15 20:31:39.886883] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffdcbc11170 on poll group 0x60b0000001a0 00:04:56.597 [2024-04-15 20:31:39.886943] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:04:56.597 [2024-04-15 20:31:39.886991] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:04:56.597 [2024-04-15 20:31:39.887023] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffdcbc11170 on poll group 0x60b0000001a0 00:04:56.597 passed 00:04:56.597 00:04:56.597 Run Summary: Type Total Ran Passed Failed Inactive 00:04:56.597 suites 1 1 n/a 0 0 00:04:56.597 tests 22 22 22 0 0 00:04:56.597 asserts 412 412 412 0 n/a 00:04:56.597 00:04:56.597 Elapsed time = 0.000 seconds 00:04:56.597 [2024-04-15 20:31:39.887082] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 701:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:04:56.597 00:04:56.597 real 0m0.041s 00:04:56.597 user 0m0.021s 00:04:56.597 sys 0m0.021s 00:04:56.597 20:31:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.597 20:31:39 -- common/autotest_common.sh@10 -- # set +x 00:04:56.597 ************************************ 00:04:56.597 END TEST unittest_nvme_rdma 00:04:56.597 ************************************ 00:04:56.597 20:31:39 -- unit/unittest.sh@251 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:04:56.597 20:31:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:56.597 20:31:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:56.597 20:31:39 -- common/autotest_common.sh@10 -- # set +x 00:04:56.597 ************************************ 00:04:56.597 START TEST unittest_nvmf_transport 00:04:56.597 ************************************ 00:04:56.597 20:31:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:04:56.597 00:04:56.597 00:04:56.597 CUnit - A unit testing framework for C - Version 2.1-3 00:04:56.597 http://cunit.sourceforge.net/ 00:04:56.597 00:04:56.597 00:04:56.597 Suite: nvmf 00:04:56.597 Test: test_spdk_nvmf_transport_create ...[2024-04-15 20:31:39.984498] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 247:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:04:56.597 passed 00:04:56.597 Test: test_nvmf_transport_poll_group_create ...passed 00:04:56.597 Test: test_spdk_nvmf_transport_opts_init ...[2024-04-15 20:31:39.984789] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 267:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:04:56.597 [2024-04-15 20:31:39.984826] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:04:56.597 [2024-04-15 20:31:39.984940] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 254:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:04:56.597 [2024-04-15 20:31:39.985072] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 788:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
00:04:56.597 passed 00:04:56.597 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:04:56.597 00:04:56.597 Run Summary: Type Total Ran Passed Failed Inactive 00:04:56.597 suites 1 1 n/a 0 0 00:04:56.597 tests 4 4 4 0 0 00:04:56.597 asserts 49 49 49 0 n/a 00:04:56.597 00:04:56.597 Elapsed time = 0.000 seconds 00:04:56.597 [2024-04-15 20:31:39.985182] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 793:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:04:56.598 [2024-04-15 20:31:39.985216] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 798:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:04:56.598 00:04:56.598 real 0m0.038s 00:04:56.598 user 0m0.022s 00:04:56.598 sys 0m0.017s 00:04:56.598 20:31:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.598 20:31:39 -- common/autotest_common.sh@10 -- # set +x 00:04:56.598 ************************************ 00:04:56.598 END TEST unittest_nvmf_transport 00:04:56.598 ************************************ 00:04:56.598 20:31:40 -- unit/unittest.sh@252 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:04:56.598 20:31:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:56.598 20:31:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:56.598 20:31:40 -- common/autotest_common.sh@10 -- # set +x 00:04:56.598 ************************************ 00:04:56.598 START TEST unittest_rdma 00:04:56.598 ************************************ 00:04:56.598 20:31:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:04:56.598 00:04:56.598 00:04:56.598 CUnit - A unit testing framework for C - Version 2.1-3 00:04:56.598 http://cunit.sourceforge.net/ 00:04:56.598 00:04:56.598 00:04:56.598 Suite: rdma_common 00:04:56.598 Test: test_spdk_rdma_pd ...passed 00:04:56.598 00:04:56.598 Run Summary: Type Total Ran Passed Failed Inactive 00:04:56.598 suites 1 1 n/a 0 0 00:04:56.598 tests 1 1 1 0 0 00:04:56.598 asserts 31 31 31 0 n/a 00:04:56.598 00:04:56.598 Elapsed time = 0.000 seconds 00:04:56.598 [2024-04-15 20:31:40.082042] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:04:56.598 [2024-04-15 20:31:40.082383] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:04:56.858 ************************************ 00:04:56.858 END TEST unittest_rdma 00:04:56.858 ************************************ 00:04:56.858 00:04:56.858 real 0m0.037s 00:04:56.858 user 0m0.020s 00:04:56.858 sys 0m0.017s 00:04:56.858 20:31:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.858 20:31:40 -- common/autotest_common.sh@10 -- # set +x 00:04:56.858 20:31:40 -- unit/unittest.sh@255 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:04:56.858 20:31:40 -- unit/unittest.sh@256 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:04:56.858 20:31:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:56.858 20:31:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:56.858 20:31:40 -- common/autotest_common.sh@10 -- # set +x 00:04:56.858 ************************************ 00:04:56.858 START TEST unittest_nvme_cuse 00:04:56.858 ************************************ 00:04:56.858 20:31:40 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:04:56.858 00:04:56.858 00:04:56.858 CUnit - A unit testing framework for C - Version 2.1-3 00:04:56.858 http://cunit.sourceforge.net/ 00:04:56.858 00:04:56.858 00:04:56.858 Suite: nvme_cuse 00:04:56.858 Test: test_cuse_nvme_submit_io_read_write ...passed 00:04:56.858 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:04:56.858 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:04:56.858 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:04:56.858 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:04:56.858 Test: test_cuse_nvme_submit_io ...passed 00:04:56.858 Test: test_cuse_nvme_reset ...passed 00:04:56.858 Test: test_nvme_cuse_stop ...passed 00:04:56.858 Test: test_spdk_nvme_cuse_get_ctrlr_name ...[2024-04-15 20:31:40.182273] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 656:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:04:56.858 [2024-04-15 20:31:40.182532] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 341:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:04:56.858 passed 00:04:56.858 00:04:56.858 Run Summary: Type Total Ran Passed Failed Inactive 00:04:56.858 suites 1 1 n/a 0 0 00:04:56.858 tests 9 9 9 0 0 00:04:56.858 asserts 121 121 121 0 n/a 00:04:56.858 00:04:56.858 Elapsed time = 0.000 seconds 00:04:56.858 ************************************ 00:04:56.858 END TEST unittest_nvme_cuse 00:04:56.858 ************************************ 00:04:56.858 00:04:56.858 real 0m0.038s 00:04:56.858 user 0m0.015s 00:04:56.858 sys 0m0.024s 00:04:56.858 20:31:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.858 20:31:40 -- common/autotest_common.sh@10 -- # set +x 00:04:56.858 20:31:40 -- unit/unittest.sh@259 -- # run_test unittest_nvmf unittest_nvmf 00:04:56.858 20:31:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:56.858 20:31:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:56.858 20:31:40 -- common/autotest_common.sh@10 -- # set +x 00:04:56.858 ************************************ 00:04:56.858 START TEST unittest_nvmf 00:04:56.858 ************************************ 00:04:56.858 20:31:40 -- common/autotest_common.sh@1104 -- # unittest_nvmf 00:04:56.858 20:31:40 -- unit/unittest.sh@106 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:04:56.858 00:04:56.858 00:04:56.858 CUnit - A unit testing framework for C - Version 2.1-3 00:04:56.858 http://cunit.sourceforge.net/ 00:04:56.858 00:04:56.858 00:04:56.858 Suite: nvmf 00:04:56.858 Test: test_get_log_page ...passed 00:04:56.858 Test: test_process_fabrics_cmd ...passed 00:04:56.858 Test: test_connect ...[2024-04-15 20:31:40.276976] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:04:56.858 [2024-04-15 20:31:40.277972] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 905:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:04:56.858 [2024-04-15 20:31:40.278094] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 768:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:04:56.858 [2024-04-15 20:31:40.278154] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 944:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:04:56.858 [2024-04-15 20:31:40.278194] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 
00:04:56.858 [2024-04-15 20:31:40.278292] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 779:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:04:56.858 [2024-04-15 20:31:40.278332] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 786:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:04:56.858 [2024-04-15 20:31:40.278449] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 792:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:04:56.858 [2024-04-15 20:31:40.278494] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 819:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:04:56.858 [2024-04-15 20:31:40.278556] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:04:56.858 [2024-04-15 20:31:40.278609] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 587:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:04:56.858 [2024-04-15 20:31:40.278733] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 593:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:04:56.858 [2024-04-15 20:31:40.278772] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 599:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:04:56.858 [2024-04-15 20:31:40.278833] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 606:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:04:56.858 [2024-04-15 20:31:40.278875] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 623:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:04:56.858 [2024-04-15 20:31:40.278947] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 232:ctrlr_add_qpair_and_send_rsp: *ERROR*: Got I/O connect with duplicate QID 1 00:04:56.858 [2024-04-15 20:31:40.279000] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 699:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 2, group (nil)) 00:04:56.858 passed 00:04:56.858 Test: test_get_ns_id_desc_list ...passed 00:04:56.858 Test: test_identify_ns ...[2024-04-15 20:31:40.279143] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:04:56.858 passed 00:04:56.858 Test: test_identify_ns_iocs_specific ...passed 00:04:56.858 Test: test_reservation_write_exclusive ...passed 00:04:56.858 Test: test_reservation_exclusive_access ...passed 00:04:56.858 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:04:56.858 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:04:56.858 Test: test_reservation_notification_log_page ...passed 00:04:56.858 Test: test_get_dif_ctx ...passed 00:04:56.858 Test: test_set_get_features ...passed 00:04:56.858 Test: test_identify_ctrlr ...passed 00:04:56.858 Test: test_identify_ctrlr_iocs_specific ...passed 00:04:56.858 Test: test_custom_admin_cmd ...passed 00:04:56.858 Test: test_fused_compare_and_write ...passed 00:04:56.858 Test: test_multi_async_event_reqs ...passed 00:04:56.858 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:04:56.858 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:04:56.858 Test: test_multi_async_events ...passed 00:04:56.858 Test: test_rae ...passed 00:04:56.858 Test: test_nvmf_ctrlr_create_destruct ...passed 00:04:56.858 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:04:56.858 Test: test_spdk_nvmf_request_zcopy_start ...passed 00:04:56.858 Test: test_zcopy_read ...passed 00:04:56.858 Test: 
test_zcopy_write ...[2024-04-15 20:31:40.279268] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:04:56.858 [2024-04-15 20:31:40.279359] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:04:56.858 [2024-04-15 20:31:40.279447] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:04:56.858 [2024-04-15 20:31:40.279599] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:04:56.858 [2024-04-15 20:31:40.280159] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:04:56.858 [2024-04-15 20:31:40.280186] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:04:56.858 [2024-04-15 20:31:40.280217] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1545:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:04:56.858 [2024-04-15 20:31:40.280262] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1621:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:04:56.858 [2024-04-15 20:31:40.280557] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4105:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:04:56.858 [2024-04-15 20:31:40.280589] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4094:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:04:56.858 [2024-04-15 20:31:40.280621] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4112:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:04:56.858 [2024-04-15 20:31:40.280906] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4232:nvmf_ctrlr_process_io_cmd: *ERROR*: I/O command sent before CONNECT 00:04:56.858 passed 00:04:56.858 Test: test_nvmf_property_set ...passed 00:04:56.858 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-04-15 20:31:40.281027] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:04:56.858 passed 00:04:56.858 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...passed 00:04:56.858 00:04:56.858 Run Summary: Type Total Ran Passed Failed Inactive 00:04:56.858 suites 1 1 n/a 0 0 00:04:56.858 tests 30 30 30 0 0 00:04:56.858 asserts 885 885 885 0 n/a 00:04:56.858 00:04:56.858 Elapsed time = 0.000 seconds 00:04:56.858 [2024-04-15 20:31:40.281072] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:04:56.858 [2024-04-15 20:31:40.281105] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1855:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:04:56.858 [2024-04-15 20:31:40.281140] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1861:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:04:56.858 [2024-04-15 20:31:40.281165] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1873:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:04:56.858 20:31:40 -- unit/unittest.sh@107 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:04:56.858 00:04:56.858 
00:04:56.858 CUnit - A unit testing framework for C - Version 2.1-3 00:04:56.858 http://cunit.sourceforge.net/ 00:04:56.858 00:04:56.858 00:04:56.858 Suite: nvmf 00:04:56.858 Test: test_get_rw_params ...passed 00:04:56.858 Test: test_lba_in_range ...passed 00:04:56.858 Test: test_get_dif_ctx ...passed 00:04:56.858 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:04:56.858 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-04-15 20:31:40.319326] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 435:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:04:56.858 [2024-04-15 20:31:40.319542] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 443:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:04:56.858 [2024-04-15 20:31:40.319625] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 450:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:04:56.858 passed 00:04:56.858 Test: test_nvmf_bdev_ctrlr_zcopy_start ...passed 00:04:56.858 Test: test_nvmf_bdev_ctrlr_cmd ...passed 00:04:56.858 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:04:56.858 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:04:56.858 00:04:56.858 Run Summary: Type Total Ran Passed Failed Inactive 00:04:56.858 suites 1 1 n/a 0 0 00:04:56.858 tests 9 9 9 0 0 00:04:56.858 asserts 157 157 157 0 n/a 00:04:56.858 00:04:56.858 Elapsed time = 0.000 seconds 00:04:56.858 [2024-04-15 20:31:40.319884] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 946:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:04:56.858 [2024-04-15 20:31:40.319970] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 953:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:04:56.858 [2024-04-15 20:31:40.320059] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 389:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:04:56.858 [2024-04-15 20:31:40.320091] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 396:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:04:56.858 [2024-04-15 20:31:40.320152] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 488:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:04:56.858 [2024-04-15 20:31:40.320183] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 495:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:04:56.858 20:31:40 -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:04:56.858 00:04:56.858 00:04:56.858 CUnit - A unit testing framework for C - Version 2.1-3 00:04:56.858 http://cunit.sourceforge.net/ 00:04:56.858 00:04:56.858 00:04:56.858 Suite: nvmf 00:04:56.858 Test: test_discovery_log ...passed 00:04:57.119 Test: test_discovery_log_with_filters ...passed 00:04:57.119 00:04:57.119 Run Summary: Type Total Ran Passed Failed Inactive 00:04:57.119 suites 1 1 n/a 0 0 00:04:57.119 tests 2 2 2 0 0 00:04:57.119 asserts 238 238 238 0 n/a 00:04:57.119 00:04:57.119 Elapsed time = 0.010 seconds 00:04:57.119 20:31:40 -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:04:57.119 00:04:57.119 00:04:57.119 CUnit - A unit testing framework for C - Version 2.1-3 00:04:57.119 http://cunit.sourceforge.net/ 00:04:57.119 00:04:57.119 00:04:57.119 Suite: nvmf 00:04:57.119 Test: nvmf_test_create_subsystem ...[2024-04-15 20:31:40.393616] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:04:57.119 [2024-04-15 20:31:40.393838] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:04:57.119 [2024-04-15 20:31:40.393916] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:04:57.119 [2024-04-15 20:31:40.393952] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:04:57.119 [2024-04-15 20:31:40.393982] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:04:57.119 passed 00:04:57.119 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-04-15 20:31:40.394014] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:04:57.119 [2024-04-15 20:31:40.394060] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:04:57.119 [2024-04-15 20:31:40.394176] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
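The subsystem suite here drives nvmf_nqn_is_valid() through the NQN grammar: the 223-byte cap (hence "length 224 > max 223"), the "nqn.YYYY-MM." date prefix, a ':'-separated user-specified name, per-label letter/alphanumeric rules, valid UTF-8, and the uuid form. A rough, hypothetical sketch of just the first few of those checks (not SPDK's actual validator):

    #include <ctype.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #define NVMF_NQN_MAX_LEN 223   /* the "length 224 > max 223" check above */

    /* Hypothetical subset of the NQN checks: overall length, the "nqn.YYYY-MM."
     * date prefix, and a ':' introducing the user-specified name. */
    static bool nqn_is_valid(const char *nqn)
    {
        if (strlen(nqn) > NVMF_NQN_MAX_LEN)
            return false;
        if (strncmp(nqn, "nqn.", 4) != 0)
            return false;
        const char *d = nqn + 4;   /* expect "YYYY-MM." here, e.g. "2016-06." */
        for (int i = 0; i < 4; i++)
            if (!isdigit((unsigned char)d[i]))
                return false;
        if (d[4] != '-' || !isdigit((unsigned char)d[5]) ||
            !isdigit((unsigned char)d[6]) || d[7] != '.')
            return false;
        /* a non-empty user-specified name must follow a ':' after the domain */
        const char *colon = strchr(d + 8, ':');
        return colon != NULL && colon[1] != '\0';
    }

    int main(void)
    {
        printf("%d\n", nqn_is_valid("nqn.2016-06.io.spdk:subsystem1")); /* 1 */
        printf("%d\n", nqn_is_valid("nqn.2016-06.io.spdk:"));           /* 0 */
        return 0;
    }

The second call fails for the same reason as the first *ERROR* above: the NQN carries no user-specified name after the ':'.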
00:04:57.119 [2024-04-15 20:31:40.394232] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:04:57.119 [2024-04-15 20:31:40.394267] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:04:57.119 [2024-04-15 20:31:40.394293] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:04:57.119 [2024-04-15 20:31:40.394483] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:04:57.119 passed 00:04:57.119 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:04:57.119 Test: test_reservation_register ...[2024-04-15 20:31:40.394583] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1734:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:04:57.119 [2024-04-15 20:31:40.394849] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2783:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:04:57.119 passed 00:04:57.119 Test: test_reservation_register_with_ptpl ...[2024-04-15 20:31:40.394933] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2841:nvmf_ns_reservation_register: *ERROR*: No registrant 00:04:57.119 passed 00:04:57.119 Test: test_reservation_acquire_preempt_1 ...passed 00:04:57.119 Test: test_reservation_acquire_release_with_ptpl ...[2024-04-15 20:31:40.395698] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2783:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:04:57.119 passed 00:04:57.119 Test: test_reservation_release ...passed 00:04:57.119 Test: test_reservation_unregister_notification ...passed 00:04:57.119 Test: test_reservation_release_notification ...[2024-04-15 20:31:40.397136] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2783:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:04:57.119 [2024-04-15 20:31:40.397262] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2783:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:04:57.119 [2024-04-15 20:31:40.397422] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2783:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:04:57.119 passed 00:04:57.119 Test: test_reservation_release_notification_write_exclusive ...passed 00:04:57.119 Test: test_reservation_clear_notification ...passed 00:04:57.119 Test: test_reservation_preempt_notification ...passed 00:04:57.119 Test: test_spdk_nvmf_ns_event ...[2024-04-15 20:31:40.397550] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2783:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:04:57.119 [2024-04-15 20:31:40.397690] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2783:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:04:57.119 [2024-04-15 20:31:40.397814] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2783:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:04:57.119 passed 00:04:57.119 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 
00:04:57.120 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:04:57.120 Test: test_spdk_nvmf_subsystem_add_host ...[2024-04-15 20:31:40.398215] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 260:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:04:57.120 passed 00:04:57.120 Test: test_nvmf_ns_reservation_report ...passed 00:04:57.120 Test: test_nvmf_nqn_is_valid ...[2024-04-15 20:31:40.398285] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 840:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to transport_ut transport 00:04:57.120 [2024-04-15 20:31:40.398381] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3146:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:04:57.120 [2024-04-15 20:31:40.398458] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:04:57.120 [2024-04-15 20:31:40.398495] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:6a5e8260-0a28-4e7a-9d10-9e07fcd6dca": uuid is not the correct length 00:04:57.120 passed 00:04:57.120 Test: test_nvmf_ns_reservation_restore ...[2024-04-15 20:31:40.398530] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:04:57.120 [2024-04-15 20:31:40.398674] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2340:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:04:57.120 passed 00:04:57.120 Test: test_nvmf_subsystem_state_change ...passed 00:04:57.120 Test: test_nvmf_reservation_custom_ops ...passed 00:04:57.120 00:04:57.120 Run Summary: Type Total Ran Passed Failed Inactive 00:04:57.120 suites 1 1 n/a 0 0 00:04:57.120 tests 22 22 22 0 0 00:04:57.120 asserts 405 405 405 0 n/a 00:04:57.120 00:04:57.120 Elapsed time = 0.010 seconds 00:04:57.120 20:31:40 -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:04:57.120 00:04:57.120 00:04:57.120 CUnit - A unit testing framework for C - Version 2.1-3 00:04:57.120 http://cunit.sourceforge.net/ 00:04:57.120 00:04:57.120 00:04:57.120 Suite: nvmf 00:04:57.120 Test: test_nvmf_tcp_create ...[2024-04-15 20:31:40.444492] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 730:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:04:57.120 passed 00:04:57.120 Test: test_nvmf_tcp_destroy ...passed 00:04:57.120 Test: test_nvmf_tcp_poll_group_create ...passed 00:04:57.120 Test: test_nvmf_tcp_send_c2h_data ...passed 00:04:57.120 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:04:57.120 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:04:57.120 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:04:57.120 Test: test_nvmf_tcp_send_c2h_term_req ...passed 00:04:57.120 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:04:57.120 Test: test_nvmf_tcp_icreq_handle ...[2024-04-15 20:31:40.555288] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:04:57.120 [2024-04-15 20:31:40.555366] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc8b998c40 is same with the state(5) to be set 00:04:57.120 [2024-04-15 20:31:40.555425] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc8b998c40 is same with the state(5) to be set 00:04:57.120 [2024-04-15 20:31:40.555450] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:04:57.120 [2024-04-15 20:31:40.555467] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc8b998c40 is same with the state(5) to be set 00:04:57.120 [2024-04-15 20:31:40.555519] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:04:57.120 [2024-04-15 20:31:40.555586] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:04:57.120 [2024-04-15 20:31:40.555618] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc8b998c40 is same with the state(5) to be set 00:04:57.120 [2024-04-15 20:31:40.555634] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:04:57.120 passed 00:04:57.120 Test: test_nvmf_tcp_check_xfer_type ...passed 00:04:57.120 Test: test_nvmf_tcp_invalid_sgl ...passed 00:04:57.120 Test: test_nvmf_tcp_pdu_ch_handle ...passed 00:04:57.120 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-04-15 20:31:40.555857] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc8b998c40 is same with the state(5) to be set 00:04:57.120 [2024-04-15 20:31:40.555887] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:04:57.120 [2024-04-15 20:31:40.555915] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc8b998c40 is same with the state(5) to be set 00:04:57.120 [2024-04-15 20:31:40.556014] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:04:57.120 [2024-04-15 20:31:40.556049] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc8b998c40 is same with the state(5) to be set 00:04:57.120 [2024-04-15 20:31:40.556098] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2484:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:04:57.120 [2024-04-15 20:31:40.556124] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:04:57.120 [2024-04-15 20:31:40.556151] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc8b998c40 is same with the state(5) to be set 00:04:57.120 [2024-04-15 20:31:40.556193] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2216:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7ffc8b9999a0 00:04:57.120 [2024-04-15 20:31:40.556254] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:04:57.120 [2024-04-15 20:31:40.556287] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc8b999100 is same with the state(5) to be set 00:04:57.120 [2024-04-15 20:31:40.556313] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2273:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7ffc8b999100 00:04:57.120 [2024-04-15 20:31:40.556334] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:04:57.120 [2024-04-15 20:31:40.556360] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc8b999100 is same with the state(5) to be set 00:04:57.120 [2024-04-15 20:31:40.556379] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2226:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:04:57.120 [2024-04-15 20:31:40.556406] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:04:57.120 [2024-04-15 20:31:40.556447] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc8b999100 is same with the state(5) to be set 00:04:57.120 [2024-04-15 20:31:40.556481] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2265:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:04:57.120 [2024-04-15 20:31:40.556503] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:04:57.120 [2024-04-15 20:31:40.556527] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc8b999100 is same with the state(5) to be set 00:04:57.120 [2024-04-15 20:31:40.556565] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:04:57.120 [2024-04-15 20:31:40.556589] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc8b999100 is same with the state(5) to be set 00:04:57.120 [2024-04-15 20:31:40.556627] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:04:57.120 [2024-04-15 20:31:40.556646] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc8b999100 is same with the state(5) to be set 00:04:57.120 [2024-04-15 20:31:40.556688] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:04:57.120 [2024-04-15 20:31:40.556711] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc8b999100 is same with the state(5) to be set 00:04:57.120 [2024-04-15 20:31:40.556755] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:04:57.120 [2024-04-15 20:31:40.556780] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc8b999100 is same with the state(5) to be set 00:04:57.120 [2024-04-15 20:31:40.556826] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:04:57.120 [2024-04-15 20:31:40.556847] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc8b999100 is same with the state(5) to be set 00:04:57.120 [2024-04-15 20:31:40.556870] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:04:57.120 [2024-04-15 20:31:40.556887] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc8b999100 is same with the state(5) to be set 00:04:57.120 passed 00:04:57.120 Test: test_nvmf_tcp_tls_generate_psk_id ...passed 00:04:57.120 Test: test_nvmf_tcp_tls_generate_retained_psk ...passed 00:04:57.120 Test: test_nvmf_tcp_tls_generate_tls_psk ...passed 00:04:57.120 00:04:57.120 Run Summary: Type Total Ran Passed Failed Inactive 00:04:57.120 suites 1 1 n/a 0 0 00:04:57.120 tests 17 17 17 0 0 00:04:57.120 asserts 222 222 222 0 n/a 00:04:57.120 00:04:57.120 Elapsed time = 0.150 seconds 00:04:57.120 [2024-04-15 20:31:40.571420] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:04:57.120 [2024-04-15 20:31:40.571464] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:04:57.120 [2024-04-15 20:31:40.571616] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:04:57.120 [2024-04-15 20:31:40.571635] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:04:57.120 [2024-04-15 20:31:40.571727] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:04:57.120 [2024-04-15 20:31:40.571746] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 
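Note: the nvmf_tcp suite above ends with negative-path checks on the TLS PSK helpers ("Out buffer too small!", "Unknown cipher suite requested!"): each test feeds the function deliberately bad input and asserts on the error return. A minimal self-contained CUnit suite in the same style, with a made-up function under test (fill_buffer is illustrative, not SPDK code):

#include <CUnit/Basic.h>
#include <string.h>

/* Illustrative function under test: fails with -EINVAL when the output
 * buffer is too small, mirroring the "Out buffer too small!" checks
 * logged above. */
static int fill_buffer(char *out, size_t out_len)
{
    const char payload[] = "psk-identity";

    if (out == NULL || out_len < sizeof(payload)) {
        return -22; /* -EINVAL */
    }
    memcpy(out, payload, sizeof(payload));
    return 0;
}

static void test_buffer_too_small(void)
{
    char small[4];

    /* Negative path: the call must fail, not truncate silently. */
    CU_ASSERT(fill_buffer(small, sizeof(small)) != 0);
}

int main(void)
{
    CU_pSuite suite;

    if (CU_initialize_registry() != CUE_SUCCESS) {
        return CU_get_error();
    }
    suite = CU_add_suite("example", NULL, NULL);
    if (suite == NULL) {
        CU_cleanup_registry();
        return CU_get_error();
    }
    CU_add_test(suite, "test_buffer_too_small", test_buffer_too_small);
    CU_basic_set_mode(CU_BRM_VERBOSE);
    CU_basic_run_tests();
    CU_cleanup_registry();
    return CU_get_error();
}

Run against libcunit, this prints the same "Run Summary" table seen throughout the log.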
00:04:57.380 20:31:40 -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:04:57.380 00:04:57.380 00:04:57.380 CUnit - A unit testing framework for C - Version 2.1-3 00:04:57.380 http://cunit.sourceforge.net/ 00:04:57.380 00:04:57.380 00:04:57.380 Suite: nvmf 00:04:57.380 Test: test_nvmf_tgt_create_poll_group ...passed 00:04:57.380 00:04:57.380 Run Summary: Type Total Ran Passed Failed Inactive 00:04:57.380 suites 1 1 n/a 0 0 00:04:57.380 tests 1 1 1 0 0 00:04:57.380 asserts 17 17 17 0 n/a 00:04:57.380 00:04:57.380 Elapsed time = 0.030 seconds 00:04:57.380 ************************************ 00:04:57.380 END TEST unittest_nvmf 00:04:57.380 ************************************ 00:04:57.380 00:04:57.380 real 0m0.490s 00:04:57.380 user 0m0.211s 00:04:57.380 sys 0m0.279s 00:04:57.380 20:31:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.380 20:31:40 -- common/autotest_common.sh@10 -- # set +x 00:04:57.380 20:31:40 -- unit/unittest.sh@260 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:04:57.380 20:31:40 -- unit/unittest.sh@265 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:04:57.380 20:31:40 -- unit/unittest.sh@266 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:04:57.380 20:31:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:57.380 20:31:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:57.380 20:31:40 -- common/autotest_common.sh@10 -- # set +x 00:04:57.380 ************************************ 00:04:57.380 START TEST unittest_nvmf_rdma 00:04:57.380 ************************************ 00:04:57.380 20:31:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:04:57.380 00:04:57.380 00:04:57.380 CUnit - A unit testing framework for C - Version 2.1-3 00:04:57.380 http://cunit.sourceforge.net/ 00:04:57.380 00:04:57.380 00:04:57.380 Suite: nvmf 00:04:57.380 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-04-15 20:31:40.840245] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1916:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:04:57.380 [2024-04-15 20:31:40.840545] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:04:57.380 [2024-04-15 20:31:40.840585] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:04:57.380 passed 00:04:57.380 Test: test_spdk_nvmf_rdma_request_process ...passed 00:04:57.380 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:04:57.380 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:04:57.380 Test: test_nvmf_rdma_opts_init ...passed 00:04:57.380 Test: test_nvmf_rdma_request_free_data ...passed 00:04:57.380 Test: test_nvmf_rdma_update_ibv_state ...passed 00:04:57.380 Test: test_nvmf_rdma_resources_create ...[2024-04-15 20:31:40.841542] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 614:nvmf_rdma_update_ibv_state: *ERROR*: Failed to get updated RDMA queue pair state! 
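Note: the rdma_ut errors above come from feeding nvmf_rdma_request_parse_sgl descriptors that exceed the transport limits (an SGL length of 0x40000 against a max I/O size of 0x20000, and in-capsule data larger than the capsule). A rough sketch of that kind of bounds check, using hypothetical names rather than the SPDK implementation:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical SGL descriptor: either keyed (remote buffer) or carried
 * in-capsule alongside the command. */
struct sgl_desc {
    uint32_t length;
    bool     in_capsule;
};

static int parse_sgl(const struct sgl_desc *sgl,
                     uint32_t max_io_size, uint32_t in_capsule_size)
{
    if (sgl->in_capsule) {
        if (sgl->length > in_capsule_size) {
            /* matches "In-capsule data length ... exceeds capsule length" */
            return -1;
        }
    } else if (sgl->length > max_io_size) {
        /* matches "SGL length 0x40000 exceeds max io size 0x20000" */
        return -1;
    }
    return 0;
}

int main(void)
{
    struct sgl_desc big = { .length = 0x40000, .in_capsule = false };

    printf("parse_sgl: %d\n", parse_sgl(&big, 0x20000, 0));
    return 0;
}

The unit tests assert that each oversized descriptor is rejected, which is why the *ERROR* lines are expected output here.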
00:04:57.380 [2024-04-15 20:31:40.841584] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 625:nvmf_rdma_update_ibv_state: *ERROR*: QP#0: bad state updated: 10, maybe hardware issue 00:04:57.380 passed 00:04:57.380 Test: test_nvmf_rdma_qpair_compare ...passed 00:04:57.380 Test: test_nvmf_rdma_resize_cq ...[2024-04-15 20:31:40.844623] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1006:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:04:57.380 Using CQ of insufficient size may lead to CQ overrun 00:04:57.380 passed 00:04:57.380 00:04:57.380 Run Summary: Type Total Ran Passed Failed Inactive 00:04:57.380 suites 1 1 n/a 0 0 00:04:57.380 tests 10 10 10 0 0 00:04:57.380 asserts 584 584 584 0 n/a 00:04:57.380 00:04:57.380 Elapsed time = 0.010 seconds 00:04:57.380 [2024-04-15 20:31:40.844879] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1011:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:04:57.380 [2024-04-15 20:31:40.845614] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1019:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:04:57.380 ************************************ 00:04:57.380 END TEST unittest_nvmf_rdma 00:04:57.380 ************************************ 00:04:57.380 00:04:57.380 real 0m0.048s 00:04:57.380 user 0m0.017s 00:04:57.380 sys 0m0.031s 00:04:57.380 20:31:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.380 20:31:40 -- common/autotest_common.sh@10 -- # set +x 00:04:57.640 20:31:40 -- unit/unittest.sh@269 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:04:57.640 20:31:40 -- unit/unittest.sh@273 -- # run_test unittest_scsi unittest_scsi 00:04:57.640 20:31:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:57.640 20:31:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:57.640 20:31:40 -- common/autotest_common.sh@10 -- # set +x 00:04:57.640 ************************************ 00:04:57.640 START TEST unittest_scsi 00:04:57.640 ************************************ 00:04:57.640 20:31:40 -- common/autotest_common.sh@1104 -- # unittest_scsi 00:04:57.640 20:31:40 -- unit/unittest.sh@115 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:04:57.640 00:04:57.640 00:04:57.640 CUnit - A unit testing framework for C - Version 2.1-3 00:04:57.640 http://cunit.sourceforge.net/ 00:04:57.640 00:04:57.640 00:04:57.640 Suite: dev_suite 00:04:57.640 Test: dev_destruct_null_dev ...passed 00:04:57.640 Test: dev_destruct_zero_luns ...passed 00:04:57.640 Test: dev_destruct_null_lun ...passed 00:04:57.640 Test: dev_destruct_success ...passed 00:04:57.640 Test: dev_construct_num_luns_zero ...passed 00:04:57.640 Test: dev_construct_no_lun_zero ...passed 00:04:57.640 Test: dev_construct_null_lun ...passed 00:04:57.640 Test: dev_construct_name_too_long ...passed 00:04:57.640 Test: dev_construct_success ...passed 00:04:57.640 Test: dev_construct_success_lun_zero_not_first ...passed 00:04:57.640 Test: dev_queue_mgmt_task_success ...passed 00:04:57.640 Test: dev_queue_task_success ...passed 00:04:57.640 Test: dev_stop_success ...passed 00:04:57.640 Test: dev_add_port_max_ports ...passed 00:04:57.640 Test: dev_add_port_construct_failure1 ...[2024-04-15 20:31:40.953993] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:04:57.640 [2024-04-15 20:31:40.954212] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 
241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:04:57.640 [2024-04-15 20:31:40.954244] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:04:57.640 [2024-04-15 20:31:40.954286] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:04:57.640 [2024-04-15 20:31:40.954480] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:04:57.640 [2024-04-15 20:31:40.954568] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:04:57.640 passed 00:04:57.640 Test: dev_add_port_construct_failure2 ...passed 00:04:57.640 Test: dev_add_port_success1 ...passed 00:04:57.640 Test: dev_add_port_success2 ...passed 00:04:57.640 Test: dev_add_port_success3 ...passed 00:04:57.640 Test: dev_find_port_by_id_num_ports_zero ...passed 00:04:57.640 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:04:57.640 Test: dev_find_port_by_id_success ...passed 00:04:57.640 Test: dev_add_lun_bdev_not_found ...passed 00:04:57.640 Test: dev_add_lun_no_free_lun_id ...passed 00:04:57.640 Test: dev_add_lun_success1 ...passed 00:04:57.640 Test: dev_add_lun_success2 ...passed 00:04:57.640 Test: dev_check_pending_tasks ...passed 00:04:57.640 Test: dev_iterate_luns ...passed 00:04:57.640 Test: dev_find_free_lun ...passed 00:04:57.640 00:04:57.640 Run Summary: Type Total Ran Passed Failed Inactive 00:04:57.640 suites 1 1 n/a 0 0 00:04:57.640 tests 29 29 29 0 0 00:04:57.640 asserts 97 97 97 0 n/a 00:04:57.640 00:04:57.640 Elapsed time = 0.000 seconds 00:04:57.640 [2024-04-15 20:31:40.954861] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:04:57.641 [2024-04-15 20:31:40.955237] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:04:57.641 20:31:40 -- unit/unittest.sh@116 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:04:57.641 00:04:57.641 00:04:57.641 CUnit - A unit testing framework for C - Version 2.1-3 00:04:57.641 http://cunit.sourceforge.net/ 00:04:57.641 00:04:57.641 00:04:57.641 Suite: lun_suite 00:04:57.641 Test: lun_task_mgmt_execute_abort_task_not_supported ...passed 00:04:57.641 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-04-15 20:31:40.986830] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:04:57.641 [2024-04-15 20:31:40.987095] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:04:57.641 passed 00:04:57.641 Test: lun_task_mgmt_execute_lun_reset ...passed 00:04:57.641 Test: lun_task_mgmt_execute_target_reset ...passed 00:04:57.641 Test: lun_task_mgmt_execute_invalid_case ...passed 00:04:57.641 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:04:57.641 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:04:57.641 Test: lun_append_task_null_lun_not_supported ...passed 00:04:57.641 Test: lun_execute_scsi_task_pending ...passed 00:04:57.641 Test: 
lun_execute_scsi_task_complete ...passed 00:04:57.641 Test: lun_execute_scsi_task_resize ...passed 00:04:57.641 Test: lun_destruct_success ...passed 00:04:57.641 Test: lun_construct_null_ctx ...passed 00:04:57.641 Test: lun_construct_success ...passed 00:04:57.641 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:04:57.641 Test: lun_reset_task_suspend_scsi_task ...passed 00:04:57.641 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:04:57.641 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:04:57.641 00:04:57.641 Run Summary: Type Total Ran Passed Failed Inactive 00:04:57.641 suites 1 1 n/a 0 0 00:04:57.641 tests 18 18 18 0 0 00:04:57.641 asserts 153 153 153 0 n/a 00:04:57.641 00:04:57.641 Elapsed time = 0.010 seconds 00:04:57.641 [2024-04-15 20:31:40.987212] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:04:57.641 [2024-04-15 20:31:40.987342] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:04:57.641 20:31:41 -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:04:57.641 00:04:57.641 00:04:57.641 CUnit - A unit testing framework for C - Version 2.1-3 00:04:57.641 http://cunit.sourceforge.net/ 00:04:57.641 00:04:57.641 00:04:57.641 Suite: scsi_suite 00:04:57.641 Test: scsi_init ...passed 00:04:57.641 00:04:57.641 Run Summary: Type Total Ran Passed Failed Inactive 00:04:57.641 suites 1 1 n/a 0 0 00:04:57.641 tests 1 1 1 0 0 00:04:57.641 asserts 1 1 1 0 n/a 00:04:57.641 00:04:57.641 Elapsed time = 0.000 seconds 00:04:57.641 20:31:41 -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:04:57.641 00:04:57.641 00:04:57.641 CUnit - A unit testing framework for C - Version 2.1-3 00:04:57.641 http://cunit.sourceforge.net/ 00:04:57.641 00:04:57.641 00:04:57.641 Suite: translation_suite 00:04:57.641 Test: mode_select_6_test ...passed 00:04:57.641 Test: mode_select_6_test2 ...passed 00:04:57.641 Test: mode_sense_6_test ...passed 00:04:57.641 Test: mode_sense_10_test ...passed 00:04:57.641 Test: inquiry_evpd_test ...passed 00:04:57.641 Test: inquiry_standard_test ...passed 00:04:57.641 Test: inquiry_overflow_test ...passed 00:04:57.641 Test: task_complete_test ...passed 00:04:57.641 Test: lba_range_test ...passed 00:04:57.641 Test: xfer_len_test ...[2024-04-15 20:31:41.044555] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:04:57.641 passed 00:04:57.641 Test: xfer_test ...passed 00:04:57.641 Test: scsi_name_padding_test ...passed 00:04:57.641 Test: get_dif_ctx_test ...passed 00:04:57.641 Test: unmap_split_test ...passed 00:04:57.641 00:04:57.641 Run Summary: Type Total Ran Passed Failed Inactive 00:04:57.641 suites 1 1 n/a 0 0 00:04:57.641 tests 14 14 14 0 0 00:04:57.641 asserts 1200 1200 1200 0 n/a 00:04:57.641 00:04:57.641 Elapsed time = 0.000 seconds 00:04:57.641 20:31:41 -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:04:57.641 00:04:57.641 00:04:57.641 CUnit - A unit testing framework for C - Version 2.1-3 00:04:57.641 http://cunit.sourceforge.net/ 00:04:57.641 00:04:57.641 00:04:57.641 Suite: reservation_suite 00:04:57.641 Test: test_reservation_register ...passed 00:04:57.641 Test: test_reservation_reserve ...[2024-04-15 20:31:41.072140] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 
272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:04:57.641 [2024-04-15 20:31:41.072451] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:04:57.641 passed 00:04:57.641 Test: test_reservation_preempt_non_all_regs ...[2024-04-15 20:31:41.072513] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:04:57.641 [2024-04-15 20:31:41.072614] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:04:57.641 [2024-04-15 20:31:41.072709] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:04:57.641 [2024-04-15 20:31:41.072784] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:04:57.641 passed 00:04:57.641 Test: test_reservation_preempt_all_regs ...passed 00:04:57.641 Test: test_reservation_cmds_conflict ...[2024-04-15 20:31:41.072922] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:04:57.641 [2024-04-15 20:31:41.072979] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:04:57.641 [2024-04-15 20:31:41.073014] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:04:57.641 [2024-04-15 20:31:41.073048] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:04:57.641 [2024-04-15 20:31:41.073072] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:04:57.641 passed 00:04:57.641 Test: test_scsi2_reserve_release ...passed 00:04:57.641 Test: test_pr_with_scsi2_reserve_release ...passed 00:04:57.641 00:04:57.641 Run Summary: Type Total Ran Passed Failed Inactive 00:04:57.641 suites 1 1 n/a 0 0 00:04:57.641 tests 7 7 7 0 0 00:04:57.641 asserts 257 257 257 0 n/a 00:04:57.641 00:04:57.641 Elapsed time = 0.010 seconds 00:04:57.641 [2024-04-15 20:31:41.073095] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:04:57.641 [2024-04-15 20:31:41.073114] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:04:57.641 [2024-04-15 20:31:41.073172] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:04:57.641 ************************************ 00:04:57.641 END TEST unittest_scsi 00:04:57.641 ************************************ 00:04:57.641 00:04:57.641 real 0m0.155s 00:04:57.641 user 0m0.081s 00:04:57.641 sys 0m0.076s 00:04:57.641 20:31:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.641 20:31:41 -- common/autotest_common.sh@10 -- # set +x 00:04:57.641 20:31:41 -- unit/unittest.sh@276 -- # uname -s 00:04:57.641 20:31:41 -- unit/unittest.sh@276 -- # '[' Linux = Linux ']' 00:04:57.641 20:31:41 -- unit/unittest.sh@277 -- # run_test unittest_sock unittest_sock 00:04:57.641 20:31:41 -- 
common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:57.641 20:31:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:57.641 20:31:41 -- common/autotest_common.sh@10 -- # set +x 00:04:57.900 ************************************ 00:04:57.900 START TEST unittest_sock 00:04:57.900 ************************************ 00:04:57.900 20:31:41 -- common/autotest_common.sh@1104 -- # unittest_sock 00:04:57.900 20:31:41 -- unit/unittest.sh@123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:04:57.900 00:04:57.900 00:04:57.900 CUnit - A unit testing framework for C - Version 2.1-3 00:04:57.900 http://cunit.sourceforge.net/ 00:04:57.900 00:04:57.900 00:04:57.900 Suite: sock 00:04:57.900 Test: posix_sock ...passed 00:04:57.900 Test: ut_sock ...passed 00:04:57.900 Test: posix_sock_group ...passed 00:04:57.900 Test: ut_sock_group ...passed 00:04:57.900 Test: posix_sock_group_fairness ...passed 00:04:57.900 Test: _posix_sock_close ...passed 00:04:57.900 Test: sock_get_default_opts ...passed 00:04:57.900 Test: ut_sock_impl_get_set_opts ...passed 00:04:57.900 Test: posix_sock_impl_get_set_opts ...passed 00:04:57.900 Test: ut_sock_map ...passed 00:04:57.900 Test: override_impl_opts ...passed 00:04:57.900 Test: ut_sock_group_get_ctx ...passed 00:04:57.900 00:04:57.900 Run Summary: Type Total Ran Passed Failed Inactive 00:04:57.901 suites 1 1 n/a 0 0 00:04:57.901 tests 12 12 12 0 0 00:04:57.901 asserts 349 349 349 0 n/a 00:04:57.901 00:04:57.901 Elapsed time = 0.000 seconds 00:04:57.901 20:31:41 -- unit/unittest.sh@124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:04:57.901 00:04:57.901 00:04:57.901 CUnit - A unit testing framework for C - Version 2.1-3 00:04:57.901 http://cunit.sourceforge.net/ 00:04:57.901 00:04:57.901 00:04:57.901 Suite: posix 00:04:57.901 Test: flush ...passed 00:04:57.901 00:04:57.901 Run Summary: Type Total Ran Passed Failed Inactive 00:04:57.901 suites 1 1 n/a 0 0 00:04:57.901 tests 1 1 1 0 0 00:04:57.901 asserts 28 28 28 0 n/a 00:04:57.901 00:04:57.901 Elapsed time = 0.000 seconds 00:04:57.901 20:31:41 -- unit/unittest.sh@126 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:04:57.901 ************************************ 00:04:57.901 END TEST unittest_sock 00:04:57.901 ************************************ 00:04:57.901 00:04:57.901 real 0m0.102s 00:04:57.901 user 0m0.033s 00:04:57.901 sys 0m0.046s 00:04:57.901 20:31:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.901 20:31:41 -- common/autotest_common.sh@10 -- # set +x 00:04:57.901 20:31:41 -- unit/unittest.sh@279 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:04:57.901 20:31:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:57.901 20:31:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:57.901 20:31:41 -- common/autotest_common.sh@10 -- # set +x 00:04:57.901 ************************************ 00:04:57.901 START TEST unittest_thread 00:04:57.901 ************************************ 00:04:57.901 20:31:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:04:57.901 00:04:57.901 00:04:57.901 CUnit - A unit testing framework for C - Version 2.1-3 00:04:57.901 http://cunit.sourceforge.net/ 00:04:57.901 00:04:57.901 00:04:57.901 Suite: io_channel 00:04:57.901 Test: thread_alloc ...passed 00:04:57.901 Test: thread_send_msg ...passed 00:04:57.901 Test: thread_poller ...passed 
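Note: thread_ut exercises SPDK's message-passing model, where a caller hands a thread a function pointer plus context and the target thread runs it from its own poll loop (the real entry points live in include/spdk/thread.h). A stripped-down sketch of the pattern, with hypothetical types:

#include <stdio.h>

typedef void (*msg_fn)(void *ctx);

/* One-slot mailbox standing in for a thread's message queue. */
struct mailbox {
    msg_fn fn;   /* pending message, NULL when empty */
    void  *ctx;
};

/* What spdk_thread_send_msg() does conceptually: enqueue, never run inline. */
static void send_msg(struct mailbox *mb, msg_fn fn, void *ctx)
{
    mb->fn = fn;
    mb->ctx = ctx;
}

/* What the owning thread's poll loop does: drain and execute. */
static int poll_once(struct mailbox *mb)
{
    msg_fn fn;

    if (mb->fn == NULL) {
        return 0;
    }
    fn = mb->fn;
    mb->fn = NULL;
    fn(mb->ctx);
    return 1;
}

static void greet(void *ctx)
{
    printf("message ran on the target thread: %s\n", (const char *)ctx);
}

int main(void)
{
    struct mailbox mb = { 0 };

    send_msg(&mb, greet, "ctx");
    while (poll_once(&mb)) {
    }
    return 0;
}

The spdk_spin errors that follow are the same idea applied to locking: every misuse (wrong thread, deadlock, destroying a held lock) is provoked on purpose and must be reported.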
00:04:57.901 Test: poller_pause ...passed 00:04:57.901 Test: thread_for_each ...passed 00:04:57.901 Test: for_each_channel_remove ...passed 00:04:57.901 Test: for_each_channel_unreg ...passed 00:04:57.901 Test: thread_name ...[2024-04-15 20:31:41.351785] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x7ffe4c28a2c0 already registered (old:0x613000000200 new:0x6130000003c0) 00:04:57.901 passed 00:04:57.901 Test: channel ...passed 00:04:57.901 Test: channel_destroy_races ...passed 00:04:57.901 Test: thread_exit_test ...passed 00:04:57.901 Test: thread_update_stats_test ...passed 00:04:57.901 Test: nested_channel ...passed 00:04:57.901 Test: device_unregister_and_thread_exit_race ...[2024-04-15 20:31:41.353917] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2297:spdk_get_io_channel: *ERROR*: could not find io_device 0x48e820 00:04:57.901 [2024-04-15 20:31:41.356529] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 629:thread_exit: *ERROR*: thread 0x618000005c80 got timeout, and move it to the exited state forcefully 00:04:57.901 passed 00:04:57.901 Test: cache_closest_timed_poller ...passed 00:04:57.901 Test: multi_timed_pollers_have_same_expiration ...passed 00:04:57.901 Test: io_device_lookup ...passed 00:04:57.901 Test: spdk_spin ...passed 00:04:57.901 Test: for_each_channel_and_thread_exit_race ...[2024-04-15 20:31:41.361178] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3061:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:04:57.901 [2024-04-15 20:31:41.361228] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffe4c28a2a0 00:04:57.901 [2024-04-15 20:31:41.361287] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3099:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:04:57.901 [2024-04-15 20:31:41.362106] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:04:57.901 [2024-04-15 20:31:41.362140] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffe4c28a2a0 00:04:57.901 [2024-04-15 20:31:41.362159] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:04:57.901 [2024-04-15 20:31:41.362182] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffe4c28a2a0 00:04:57.901 [2024-04-15 20:31:41.362203] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:04:57.901 [2024-04-15 20:31:41.362226] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffe4c28a2a0 00:04:57.901 [2024-04-15 20:31:41.362243] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3043:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:04:57.901 [2024-04-15 20:31:41.362275] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffe4c28a2a0 00:04:57.901 passed 00:04:57.901 Test: for_each_thread_and_thread_exit_race ...passed 00:04:57.901 00:04:57.901 Run Summary: Type Total Ran Passed Failed Inactive 00:04:57.901 suites 1 1 n/a 0 0 00:04:57.901 tests 20 20 20 0 0 
00:04:57.901 asserts 409 409 409 0 n/a 00:04:57.901 00:04:57.901 Elapsed time = 0.010 seconds 00:04:57.901 ************************************ 00:04:57.901 END TEST unittest_thread 00:04:57.901 ************************************ 00:04:57.901 00:04:57.901 real 0m0.069s 00:04:57.901 user 0m0.046s 00:04:57.901 sys 0m0.023s 00:04:57.901 20:31:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.901 20:31:41 -- common/autotest_common.sh@10 -- # set +x 00:04:58.161 20:31:41 -- unit/unittest.sh@280 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:04:58.161 20:31:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:58.161 20:31:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:58.161 20:31:41 -- common/autotest_common.sh@10 -- # set +x 00:04:58.161 ************************************ 00:04:58.161 START TEST unittest_iobuf 00:04:58.161 ************************************ 00:04:58.161 20:31:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:04:58.161 00:04:58.161 00:04:58.161 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.161 http://cunit.sourceforge.net/ 00:04:58.161 00:04:58.161 00:04:58.161 Suite: io_channel 00:04:58.161 Test: iobuf ...passed 00:04:58.161 Test: iobuf_cache ...passed 00:04:58.161 00:04:58.161 Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.161 suites 1 1 n/a 0 0 00:04:58.161 tests 2 2 2 0 0 00:04:58.161 asserts 107 107 107 0 n/a 00:04:58.161 00:04:58.161 Elapsed time = 0.010 seconds 00:04:58.161 [2024-04-15 20:31:41.470681] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:04:58.161 [2024-04-15 20:31:41.470930] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:04:58.161 [2024-04-15 20:31:41.471025] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:04:58.161 [2024-04-15 20:31:41.471057] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 317:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:04:58.161 [2024-04-15 20:31:41.471100] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:04:58.161 [2024-04-15 20:31:41.471139] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
00:04:58.161 ************************************ 00:04:58.161 END TEST unittest_iobuf 00:04:58.161 ************************************ 00:04:58.161 00:04:58.161 real 0m0.041s 00:04:58.161 user 0m0.021s 00:04:58.161 sys 0m0.020s 00:04:58.161 20:31:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.161 20:31:41 -- common/autotest_common.sh@10 -- # set +x 00:04:58.161 20:31:41 -- unit/unittest.sh@281 -- # run_test unittest_util unittest_util 00:04:58.161 20:31:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:58.161 20:31:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:58.161 20:31:41 -- common/autotest_common.sh@10 -- # set +x 00:04:58.161 ************************************ 00:04:58.161 START TEST unittest_util 00:04:58.161 ************************************ 00:04:58.161 20:31:41 -- common/autotest_common.sh@1104 -- # unittest_util 00:04:58.161 20:31:41 -- unit/unittest.sh@132 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:04:58.161 00:04:58.161 00:04:58.161 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.161 http://cunit.sourceforge.net/ 00:04:58.161 00:04:58.161 00:04:58.161 Suite: base64 00:04:58.161 Test: test_base64_get_encoded_strlen ...passed 00:04:58.161 Test: test_base64_get_decoded_len ...passed 00:04:58.161 Test: test_base64_encode ...passed 00:04:58.161 Test: test_base64_decode ...passed 00:04:58.161 Test: test_base64_urlsafe_encode ...passed 00:04:58.161 Test: test_base64_urlsafe_decode ...passed 00:04:58.161 00:04:58.161 Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.161 suites 1 1 n/a 0 0 00:04:58.161 tests 6 6 6 0 0 00:04:58.161 asserts 112 112 112 0 n/a 00:04:58.161 00:04:58.161 Elapsed time = 0.000 seconds 00:04:58.161 20:31:41 -- unit/unittest.sh@133 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:04:58.161 00:04:58.161 00:04:58.161 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.161 http://cunit.sourceforge.net/ 00:04:58.161 00:04:58.161 00:04:58.161 Suite: bit_array 00:04:58.161 Test: test_1bit ...passed 00:04:58.161 Test: test_64bit ...passed 00:04:58.161 Test: test_find ...passed 00:04:58.161 Test: test_resize ...passed 00:04:58.161 Test: test_errors ...passed 00:04:58.161 Test: test_count ...passed 00:04:58.161 Test: test_mask_store_load ...passed 00:04:58.161 Test: test_mask_clear ...passed 00:04:58.161 00:04:58.161 Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.161 suites 1 1 n/a 0 0 00:04:58.161 tests 8 8 8 0 0 00:04:58.161 asserts 5075 5075 5075 0 n/a 00:04:58.161 00:04:58.161 Elapsed time = 0.000 seconds 00:04:58.161 20:31:41 -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:04:58.161 00:04:58.161 00:04:58.161 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.161 http://cunit.sourceforge.net/ 00:04:58.161 00:04:58.161 00:04:58.161 Suite: cpuset 00:04:58.161 Test: test_cpuset ...passed 00:04:58.161 Test: test_cpuset_parse ...[2024-04-15 20:31:41.615224] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:04:58.161 [2024-04-15 20:31:41.615418] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']' 00:04:58.161 [2024-04-15 20:31:41.615507] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:04:58.161 [2024-04-15 20:31:41.615580] 
/home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 219:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:04:58.161 [2024-04-15 20:31:41.615603] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:04:58.161 [2024-04-15 20:31:41.615631] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:04:58.161 [2024-04-15 20:31:41.615665] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:04:58.161 passed 00:04:58.161 Test: test_cpuset_fmt ...[2024-04-15 20:31:41.615746] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:04:58.161 passed 00:04:58.161 00:04:58.161 Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.161 suites 1 1 n/a 0 0 00:04:58.161 tests 3 3 3 0 0 00:04:58.161 asserts 65 65 65 0 n/a 00:04:58.161 00:04:58.161 Elapsed time = 0.000 seconds 00:04:58.161 20:31:41 -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:04:58.161 00:04:58.161 00:04:58.161 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.161 http://cunit.sourceforge.net/ 00:04:58.161 00:04:58.161 00:04:58.161 Suite: crc16 00:04:58.161 Test: test_crc16_t10dif ...passed 00:04:58.161 Test: test_crc16_t10dif_seed ...passed 00:04:58.161 Test: test_crc16_t10dif_copy ...passed 00:04:58.161 00:04:58.161 Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.161 suites 1 1 n/a 0 0 00:04:58.161 tests 3 3 3 0 0 00:04:58.161 asserts 5 5 5 0 n/a 00:04:58.161 00:04:58.161 Elapsed time = 0.000 seconds 00:04:58.161 20:31:41 -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:04:58.423 00:04:58.423 00:04:58.423 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.423 http://cunit.sourceforge.net/ 00:04:58.423 00:04:58.423 00:04:58.423 Suite: crc32_ieee 00:04:58.423 Test: test_crc32_ieee ...passed 00:04:58.423 00:04:58.423 Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.423 suites 1 1 n/a 0 0 00:04:58.423 tests 1 1 1 0 0 00:04:58.423 asserts 1 1 1 0 n/a 00:04:58.423 00:04:58.423 Elapsed time = 0.000 seconds 00:04:58.423 20:31:41 -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:04:58.423 00:04:58.423 00:04:58.423 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.423 http://cunit.sourceforge.net/ 00:04:58.423 00:04:58.423 00:04:58.423 Suite: crc32c 00:04:58.423 Test: test_crc32c ...passed 00:04:58.423 Test: test_crc32c_nvme ...passed 00:04:58.423 00:04:58.423 Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.423 suites 1 1 n/a 0 0 00:04:58.423 tests 2 2 2 0 0 00:04:58.423 asserts 16 16 16 0 n/a 00:04:58.423 00:04:58.423 Elapsed time = 0.000 seconds 00:04:58.423 20:31:41 -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:04:58.423 00:04:58.423 00:04:58.423 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.423 http://cunit.sourceforge.net/ 00:04:58.423 00:04:58.423 00:04:58.423 Suite: crc64 00:04:58.423 Test: test_crc64_nvme ...passed 00:04:58.423 00:04:58.423 Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.423 suites 1 1 n/a 0 0 00:04:58.423 tests 1 1 1 0 0 00:04:58.423 asserts 4 4 4 0 n/a 00:04:58.423 00:04:58.423 Elapsed time = 0.000 seconds 
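Note: the CRC suites above (crc16, crc32_ieee, crc32c, crc64) cover checksums used across the storage stack; crc16_ut in particular targets the T10-DIF variant of CRC-16 (polynomial 0x8BB7, zero initial value, no bit reflection). A bitwise reference implementation written from those published parameters rather than from SPDK's implementation:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* CRC-16/T10-DIF: width=16, poly=0x8BB7, init=0, refin/refout=false,
 * xorout=0. MSB-first bitwise form, no lookup table. */
static uint16_t crc16_t10dif(const uint8_t *buf, size_t len)
{
    uint16_t crc = 0;
    size_t i;
    int bit;

    for (i = 0; i < len; i++) {
        crc ^= (uint16_t)buf[i] << 8;
        for (bit = 0; bit < 8; bit++) {
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                 : (uint16_t)(crc << 1);
        }
    }
    return crc;
}

int main(void)
{
    const char *check = "123456789";

    /* The standard check value for CRC-16/T10-DIF is 0xD0DB. */
    printf("0x%04X\n",
           crc16_t10dif((const uint8_t *)check, strlen(check)));
    return 0;
}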
00:04:58.423 20:31:41 -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:04:58.423 00:04:58.423 00:04:58.423 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.423 http://cunit.sourceforge.net/ 00:04:58.423 00:04:58.423 00:04:58.423 Suite: string 00:04:58.423 Test: test_parse_ip_addr ...passed 00:04:58.423 Test: test_str_chomp ...passed 00:04:58.423 Test: test_parse_capacity ...passed 00:04:58.423 Test: test_sprintf_append_realloc ...passed 00:04:58.423 Test: test_strtol ...passed 00:04:58.423 Test: test_strtoll ...passed 00:04:58.423 Test: test_strarray ...passed 00:04:58.423 Test: test_strcpy_replace ...passed 00:04:58.423 00:04:58.423 Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.423 suites 1 1 n/a 0 0 00:04:58.423 tests 8 8 8 0 0 00:04:58.423 asserts 161 161 161 0 n/a 00:04:58.423 00:04:58.423 Elapsed time = 0.000 seconds 00:04:58.423 20:31:41 -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:04:58.423 00:04:58.423 00:04:58.423 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.423 http://cunit.sourceforge.net/ 00:04:58.423 00:04:58.423 00:04:58.423 Suite: dif 00:04:58.423 Test: dif_generate_and_verify_test ...[2024-04-15 20:31:41.796388] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:04:58.423 [2024-04-15 20:31:41.796912] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:04:58.423 passed 00:04:58.423 Test: dif_disable_check_test ...passed 00:04:58.423 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-04-15 20:31:41.797196] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:04:58.423 [2024-04-15 20:31:41.797430] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:04:58.423 [2024-04-15 20:31:41.797625] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:04:58.423 [2024-04-15 20:31:41.797918] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:04:58.423 [2024-04-15 20:31:41.798741] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:04:58.423 [2024-04-15 20:31:41.799027] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:04:58.423 [2024-04-15 20:31:41.799296] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:04:58.423 [2024-04-15 20:31:41.800175] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:04:58.423 [2024-04-15 20:31:41.800440] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:04:58.423 [2024-04-15 20:31:41.800787] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:04:58.423 [2024-04-15 20:31:41.801150] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:04:58.423 [2024-04-15 20:31:41.801391] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:04:58.423 [2024-04-15 20:31:41.801595] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:04:58.423 [2024-04-15 20:31:41.801822] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:04:58.423 [2024-04-15 20:31:41.802015] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:04:58.423 [2024-04-15 20:31:41.802216] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:04:58.423 [2024-04-15 20:31:41.802408] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:04:58.423 passed 00:04:58.423 Test: dif_apptag_mask_test ...[2024-04-15 20:31:41.802675] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:04:58.423 [2024-04-15 20:31:41.802890] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:04:58.423 [2024-04-15 20:31:41.803115] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:04:58.423 passed 00:04:58.423 Test: dif_sec_512_md_0_error_test ...passed 00:04:58.423 Test: dif_sec_4096_md_0_error_test ...passed 00:04:58.423 Test: dif_sec_4100_md_128_error_test ...[2024-04-15 20:31:41.803245] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:04:58.423 [2024-04-15 20:31:41.803290] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:04:58.423 [2024-04-15 20:31:41.803337] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
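Note: the dif_ut output above and below is dominated by "Failed to compare Guard / App Tag / Ref Tag" lines. Those three fields make up the 8-byte T10 Protection Information appended to each block; verification recomputes each field and compares it against what is stored. An illustrative layout and verify step (not the SPDK structures, and ignoring on-the-wire big-endian encoding):

#include <stdint.h>

struct t10_dif {
    uint16_t guard;   /* CRC-16/T10-DIF over the block's data */
    uint16_t app_tag; /* opaque to the device; 0xFFFF can disable checking */
    uint32_t ref_tag; /* e.g. low 32 bits of the LBA for Type 1 */
};

static int dif_verify(const struct t10_dif *expected,
                      const struct t10_dif *actual)
{
    if (expected->guard != actual->guard) {
        return -1; /* "Failed to compare Guard" */
    }
    if (expected->app_tag != actual->app_tag) {
        return -2; /* "Failed to compare App Tag" */
    }
    if (expected->ref_tag != actual->ref_tag) {
        return -3; /* "Failed to compare Ref Tag" */
    }
    return 0;
}

int main(void)
{
    struct t10_dif exp = { .guard = 0xfd4c, .app_tag = 0x0088, .ref_tag = 0x5e };
    struct t10_dif act = exp;

    /* Corrupt the guard, like the Expected=fd4c, Actual=d380 cases above. */
    act.guard = 0xd380;
    return dif_verify(&exp, &act) == -1 ? 0 : 1;
}

Each *ERROR* line in the log corresponds to one such deliberate mismatch, which is why the suite still reports every test as passed.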
00:04:58.423 [2024-04-15 20:31:41.803397] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:04:58.423 passed 00:04:58.423 Test: dif_guard_seed_test ...[2024-04-15 20:31:41.803442] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:04:58.423 passed 00:04:58.423 Test: dif_guard_value_test ...passed 00:04:58.423 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:04:58.423 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:04:58.423 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:04:58.423 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:04:58.423 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:04:58.423 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:04:58.423 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:04:58.423 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:04:58.423 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:04:58.423 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:04:58.423 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:04:58.423 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:04:58.423 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:04:58.423 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:04:58.423 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:04:58.423 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:04:58.423 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:04:58.423 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:04:58.423 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-04-15 20:31:41.824810] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=f94c, Actual=fd4c 00:04:58.423 [2024-04-15 20:31:41.825803] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fa21, Actual=fe21 00:04:58.423 [2024-04-15 20:31:41.826863] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=488 00:04:58.423 [2024-04-15 20:31:41.827853] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=488 00:04:58.423 [2024-04-15 20:31:41.828850] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=45e 00:04:58.423 [2024-04-15 20:31:41.829824] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=45e 00:04:58.423 [2024-04-15 20:31:41.830793] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fd4c, Actual=d380 00:04:58.423 [2024-04-15 20:31:41.831656] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fe21, Actual=8872 00:04:58.423 [2024-04-15 20:31:41.832328] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=1ab757ed, 
Actual=1ab753ed 00:04:58.423 [2024-04-15 20:31:41.833131] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=38574260, Actual=38574660 00:04:58.423 [2024-04-15 20:31:41.833934] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=488 00:04:58.423 [2024-04-15 20:31:41.834725] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=488 00:04:58.423 [2024-04-15 20:31:41.835517] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=45e 00:04:58.423 [2024-04-15 20:31:41.836305] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=45e 00:04:58.423 [2024-04-15 20:31:41.837109] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=1ab753ed, Actual=2e5b571b 00:04:58.423 [2024-04-15 20:31:41.837870] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=38574660, Actual=87efedcb 00:04:58.423 [2024-04-15 20:31:41.839092] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=a576a7728ecc24d3, Actual=a576a7728ecc20d3 00:04:58.423 [2024-04-15 20:31:41.840430] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=88010a2d4837a666, Actual=88010a2d4837a266 00:04:58.423 [2024-04-15 20:31:41.841742] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=488 00:04:58.423 [2024-04-15 20:31:41.843076] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=488 00:04:58.423 [2024-04-15 20:31:41.844387] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=4000000005e 00:04:58.423 [2024-04-15 20:31:41.845741] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=4000000005e 00:04:58.423 [2024-04-15 20:31:41.847065] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=a576a7728ecc20d3, Actual=7ebea035ea6059fd 00:04:58.423 passed 00:04:58.423 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-04-15 20:31:41.848280] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=88010a2d4837a266, Actual=fbd975ae99fba30a 00:04:58.423 [2024-04-15 20:31:41.848637] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:04:58.423 [2024-04-15 20:31:41.848769] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:04:58.423 [2024-04-15 20:31:41.848886] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.423 [2024-04-15 20:31:41.849009] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.423 [2024-04-15 
20:31:41.849146] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:04:58.423 [2024-04-15 20:31:41.849264] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:04:58.423 [2024-04-15 20:31:41.849385] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=d380 00:04:58.423 [2024-04-15 20:31:41.849493] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=8872 00:04:58.423 [2024-04-15 20:31:41.849588] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab757ed, Actual=1ab753ed 00:04:58.423 [2024-04-15 20:31:41.849693] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574260, Actual=38574660 00:04:58.423 [2024-04-15 20:31:41.849803] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.423 [2024-04-15 20:31:41.849899] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.423 [2024-04-15 20:31:41.850006] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:04:58.423 [2024-04-15 20:31:41.850102] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:04:58.423 [2024-04-15 20:31:41.850207] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=2e5b571b 00:04:58.423 [2024-04-15 20:31:41.850292] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=87efedcb 00:04:58.423 [2024-04-15 20:31:41.850452] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc24d3, Actual=a576a7728ecc20d3 00:04:58.423 [2024-04-15 20:31:41.850612] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a666, Actual=88010a2d4837a266 00:04:58.423 [2024-04-15 20:31:41.850788] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.423 [2024-04-15 20:31:41.850948] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.423 [2024-04-15 20:31:41.851119] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:04:58.423 [2024-04-15 20:31:41.851280] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:04:58.423 [2024-04-15 20:31:41.851449] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=7ebea035ea6059fd 00:04:58.423 [2024-04-15 20:31:41.851603] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, 
Actual=fbd975ae99fba30a 00:04:58.423 passed 00:04:58.423 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-04-15 20:31:41.851831] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:04:58.423 [2024-04-15 20:31:41.851977] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:04:58.423 [2024-04-15 20:31:41.852095] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.423 [2024-04-15 20:31:41.852221] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.423 [2024-04-15 20:31:41.852347] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:04:58.423 [2024-04-15 20:31:41.852480] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:04:58.423 [2024-04-15 20:31:41.852600] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=d380 00:04:58.423 [2024-04-15 20:31:41.852720] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=8872 00:04:58.423 [2024-04-15 20:31:41.852808] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab757ed, Actual=1ab753ed 00:04:58.423 [2024-04-15 20:31:41.852909] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574260, Actual=38574660 00:04:58.423 [2024-04-15 20:31:41.853006] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.423 passed 00:04:58.423 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-04-15 20:31:41.853105] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.423 [2024-04-15 20:31:41.853208] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:04:58.424 [2024-04-15 20:31:41.853308] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:04:58.424 [2024-04-15 20:31:41.853409] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=2e5b571b 00:04:58.424 [2024-04-15 20:31:41.853503] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=87efedcb 00:04:58.424 [2024-04-15 20:31:41.853678] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc24d3, Actual=a576a7728ecc20d3 00:04:58.424 [2024-04-15 20:31:41.853841] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a666, Actual=88010a2d4837a266 00:04:58.424 [2024-04-15 20:31:41.854004] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, 
Actual=488 00:04:58.424 [2024-04-15 20:31:41.854163] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.424 [2024-04-15 20:31:41.854327] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:04:58.424 [2024-04-15 20:31:41.854486] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:04:58.424 [2024-04-15 20:31:41.854674] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=7ebea035ea6059fd 00:04:58.424 [2024-04-15 20:31:41.854824] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=fbd975ae99fba30a 00:04:58.424 [2024-04-15 20:31:41.854958] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:04:58.424 [2024-04-15 20:31:41.855084] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:04:58.424 [2024-04-15 20:31:41.855206] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.424 passed 00:04:58.424 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...passed 00:04:58.424 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...passed 00:04:58.424 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-04-15 20:31:41.855324] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.424 [2024-04-15 20:31:41.855461] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:04:58.424 [2024-04-15 20:31:41.855579] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:04:58.424 [2024-04-15 20:31:41.855707] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=d380 00:04:58.424 [2024-04-15 20:31:41.855815] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=8872 00:04:58.424 [2024-04-15 20:31:41.855907] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab757ed, Actual=1ab753ed 00:04:58.424 [2024-04-15 20:31:41.856008] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574260, Actual=38574660 00:04:58.424 [2024-04-15 20:31:41.856117] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.424 [2024-04-15 20:31:41.856218] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.424 [2024-04-15 20:31:41.856315] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:04:58.424 [2024-04-15 20:31:41.856424] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:04:58.424 [2024-04-15 20:31:41.856526] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=2e5b571b 00:04:58.424 [2024-04-15 20:31:41.856617] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=87efedcb 00:04:58.424 [2024-04-15 20:31:41.856779] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc24d3, Actual=a576a7728ecc20d3 00:04:58.424 [2024-04-15 20:31:41.856943] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a666, Actual=88010a2d4837a266 00:04:58.424 [2024-04-15 20:31:41.857101] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.424 [2024-04-15 20:31:41.857268] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.424 [2024-04-15 20:31:41.857433] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:04:58.424 [2024-04-15 20:31:41.857597] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:04:58.424 [2024-04-15 20:31:41.857775] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=7ebea035ea6059fd 00:04:58.424 [2024-04-15 20:31:41.857930] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=fbd975ae99fba30a 00:04:58.424 [2024-04-15 20:31:41.858053] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:04:58.424 [2024-04-15 20:31:41.858175] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:04:58.424 [2024-04-15 20:31:41.858296] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.424 [2024-04-15 20:31:41.858414] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.424 [2024-04-15 20:31:41.858549] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:04:58.424 [2024-04-15 20:31:41.858674] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:04:58.424 [2024-04-15 20:31:41.858798] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=d380 00:04:58.424 [2024-04-15 20:31:41.858905] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=8872 00:04:58.424 [2024-04-15 20:31:41.859022] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab757ed, Actual=1ab753ed 
00:04:58.424 [2024-04-15 20:31:41.859120] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574260, Actual=38574660 00:04:58.424 [2024-04-15 20:31:41.859235] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.424 [2024-04-15 20:31:41.859332] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.424 [2024-04-15 20:31:41.859439] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:04:58.424 [2024-04-15 20:31:41.859536] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:04:58.424 [2024-04-15 20:31:41.859649] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=2e5b571b 00:04:58.424 [2024-04-15 20:31:41.859735] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=87efedcb 00:04:58.424 [2024-04-15 20:31:41.859909] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc24d3, Actual=a576a7728ecc20d3 00:04:58.424 [2024-04-15 20:31:41.860078] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a666, Actual=88010a2d4837a266 00:04:58.424 [2024-04-15 20:31:41.860238] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.424 [2024-04-15 20:31:41.860402] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.424 [2024-04-15 20:31:41.860573] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:04:58.424 [2024-04-15 20:31:41.860744] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:04:58.424 [2024-04-15 20:31:41.860936] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=7ebea035ea6059fd 00:04:58.424 [2024-04-15 20:31:41.861090] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=fbd975ae99fba30a 00:04:58.424 [2024-04-15 20:31:41.861206] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:04:58.424 [2024-04-15 20:31:41.861333] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:04:58.424 [2024-04-15 20:31:41.861451] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.424 [2024-04-15 20:31:41.861574] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.424 [2024-04-15 20:31:41.861720] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: 
LBA=88, Expected=58, Actual=458 00:04:58.424 [2024-04-15 20:31:41.861839] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:04:58.424 passed 00:04:58.424 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-04-15 20:31:41.861963] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=d380 00:04:58.424 [2024-04-15 20:31:41.862071] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=8872 00:04:58.424 [2024-04-15 20:31:41.862179] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab757ed, Actual=1ab753ed 00:04:58.424 [2024-04-15 20:31:41.862279] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574260, Actual=38574660 00:04:58.424 [2024-04-15 20:31:41.862389] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.424 passed 00:04:58.424 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:04:58.424 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...[2024-04-15 20:31:41.862487] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.424 [2024-04-15 20:31:41.862590] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:04:58.424 [2024-04-15 20:31:41.862708] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:04:58.424 [2024-04-15 20:31:41.862824] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=2e5b571b 00:04:58.424 [2024-04-15 20:31:41.862910] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=87efedcb 00:04:58.424 [2024-04-15 20:31:41.863084] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc24d3, Actual=a576a7728ecc20d3 00:04:58.424 [2024-04-15 20:31:41.863245] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a666, Actual=88010a2d4837a266 00:04:58.424 [2024-04-15 20:31:41.863410] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.424 [2024-04-15 20:31:41.863571] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.424 [2024-04-15 20:31:41.863747] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:04:58.424 [2024-04-15 20:31:41.863907] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:04:58.424 [2024-04-15 20:31:41.864079] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=7ebea035ea6059fd 00:04:58.424 [2024-04-15 20:31:41.864235] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=fbd975ae99fba30a 00:04:58.424 passed 00:04:58.424 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:04:58.424 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:04:58.424 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:04:58.424 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:04:58.424 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:04:58.424 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:04:58.424 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:04:58.424 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-04-15 20:31:41.881493] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=f94c, Actual=fd4c 00:04:58.424 [2024-04-15 20:31:41.882155] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=5a9f, Actual=5e9f 00:04:58.424 [2024-04-15 20:31:41.882789] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=488 00:04:58.424 [2024-04-15 20:31:41.883417] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=488 00:04:58.424 [2024-04-15 20:31:41.884045] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=45e 00:04:58.424 [2024-04-15 20:31:41.884692] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=45e 00:04:58.424 [2024-04-15 20:31:41.885315] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fd4c, Actual=d380 00:04:58.424 [2024-04-15 20:31:41.885953] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=ba8c, Actual=ccdf 00:04:58.424 [2024-04-15 20:31:41.886398] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=1ab757ed, Actual=1ab753ed 00:04:58.424 [2024-04-15 20:31:41.886851] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=91ea24ee, Actual=91ea20ee 00:04:58.424 [2024-04-15 20:31:41.887295] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=488 00:04:58.424 [2024-04-15 20:31:41.887765] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=488 00:04:58.424 [2024-04-15 20:31:41.888207] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=45e 00:04:58.424 [2024-04-15 20:31:41.888669] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=45e 00:04:58.424 [2024-04-15 20:31:41.889111] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=1ab753ed, Actual=2e5b571b 00:04:58.424 [2024-04-15 20:31:41.889559] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to 
compare Guard: LBA=94, Expected=c91054db, Actual=76a8ff70 00:04:58.424 [2024-04-15 20:31:41.890527] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=a576a7728ecc24d3, Actual=a576a7728ecc20d3 00:04:58.424 [2024-04-15 20:31:41.891624] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fbb1f42d9f7013e4, Actual=fbb1f42d9f7017e4 00:04:58.424 [2024-04-15 20:31:41.892617] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=488 00:04:58.424 [2024-04-15 20:31:41.893603] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=488 00:04:58.424 [2024-04-15 20:31:41.894572] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=4000000005e 00:04:58.424 [2024-04-15 20:31:41.895576] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=4000000005e 00:04:58.424 passed 00:04:58.424 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...passed 00:04:58.424 Test: dix_sec_512_md_0_error ...passed 00:04:58.424 Test: dix_sec_512_md_8_prchk_0_single_iov ...passed 00:04:58.424 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...[2024-04-15 20:31:41.896553] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=a576a7728ecc20d3, Actual=7ebea035ea6059fd 00:04:58.424 [2024-04-15 20:31:41.897555] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=2d19b1684f09bf67, Actual=5ec1ceeb9ec5be0b 00:04:58.424 [2024-04-15 20:31:41.897738] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:04:58.424 [2024-04-15 20:31:41.897880] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=cd05, Actual=c905 00:04:58.424 [2024-04-15 20:31:41.898023] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.424 [2024-04-15 20:31:41.898163] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.424 [2024-04-15 20:31:41.898311] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:04:58.424 [2024-04-15 20:31:41.898451] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:04:58.424 [2024-04-15 20:31:41.898585] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=d380 00:04:58.424 [2024-04-15 20:31:41.898730] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=5b45 00:04:58.424 [2024-04-15 20:31:41.898836] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab757ed, Actual=1ab753ed 00:04:58.424 [2024-04-15 20:31:41.898944] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, 
Expected=b25c3099, Actual=b25c3499 00:04:58.424 [2024-04-15 20:31:41.899059] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.424 [2024-04-15 20:31:41.899170] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.424 [2024-04-15 20:31:41.899272] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:04:58.424 [2024-04-15 20:31:41.899384] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:04:58.424 [2024-04-15 20:31:41.899485] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=2e5b571b 00:04:58.424 [2024-04-15 20:31:41.899591] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=551eeb07 00:04:58.424 [2024-04-15 20:31:41.899808] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc24d3, Actual=a576a7728ecc20d3 00:04:58.424 [2024-04-15 20:31:41.900005] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eece6f5e86a39c21, Actual=eece6f5e86a39821 00:04:58.424 [2024-04-15 20:31:41.900203] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.424 [2024-04-15 20:31:41.900405] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.424 [2024-04-15 20:31:41.900611] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:04:58.424 [2024-04-15 20:31:41.900814] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:04:58.424 [2024-04-15 20:31:41.901022] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=7ebea035ea6059fd 00:04:58.424 [2024-04-15 20:31:41.901231] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=4bbe5598871631ce 00:04:58.424 [2024-04-15 20:31:41.901273] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:04:58.424 passed 00:04:58.424 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:04:58.424 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:04:58.424 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:04:58.424 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:04:58.424 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:04:58.424 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:04:58.424 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:04:58.424 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-04-15 20:31:41.918077] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=f94c, Actual=fd4c 00:04:58.424 [2024-04-15 20:31:41.918721] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=5a9f, Actual=5e9f 00:04:58.684 [2024-04-15 20:31:41.919347] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=488 00:04:58.684 [2024-04-15 20:31:41.919977] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=488 00:04:58.684 [2024-04-15 20:31:41.920618] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=45e 00:04:58.684 [2024-04-15 20:31:41.921251] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=45e 00:04:58.684 [2024-04-15 20:31:41.921872] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fd4c, Actual=d380 00:04:58.684 [2024-04-15 20:31:41.922498] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=ba8c, Actual=ccdf 00:04:58.684 [2024-04-15 20:31:41.922945] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=1ab757ed, Actual=1ab753ed 00:04:58.684 [2024-04-15 20:31:41.923388] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=91ea24ee, Actual=91ea20ee 00:04:58.684 [2024-04-15 20:31:41.923856] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=488 00:04:58.684 [2024-04-15 20:31:41.924305] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=488 00:04:58.684 [2024-04-15 20:31:41.924757] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=45e 00:04:58.684 [2024-04-15 20:31:41.925202] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=45e 00:04:58.684 [2024-04-15 20:31:41.925648] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=1ab753ed, Actual=2e5b571b 00:04:58.684 [2024-04-15 20:31:41.926093] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=c91054db, Actual=76a8ff70 00:04:58.684 [2024-04-15 20:31:41.927067] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare 
Guard: LBA=94, Expected=a576a7728ecc24d3, Actual=a576a7728ecc20d3 00:04:58.684 [2024-04-15 20:31:41.928136] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fbb1f42d9f7013e4, Actual=fbb1f42d9f7017e4 00:04:58.684 passed 00:04:58.684 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-04-15 20:31:41.929126] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=488 00:04:58.684 [2024-04-15 20:31:41.930114] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=488 00:04:58.684 [2024-04-15 20:31:41.931088] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=4000000005e 00:04:58.684 [2024-04-15 20:31:41.932070] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=4000000005e 00:04:58.684 [2024-04-15 20:31:41.933054] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=a576a7728ecc20d3, Actual=7ebea035ea6059fd 00:04:58.684 [2024-04-15 20:31:41.934039] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=2d19b1684f09bf67, Actual=5ec1ceeb9ec5be0b 00:04:58.684 [2024-04-15 20:31:41.934210] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:04:58.684 [2024-04-15 20:31:41.934361] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=cd05, Actual=c905 00:04:58.684 [2024-04-15 20:31:41.934499] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.684 [2024-04-15 20:31:41.934633] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.684 [2024-04-15 20:31:41.934790] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:04:58.684 [2024-04-15 20:31:41.934924] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:04:58.684 passed 00:04:58.684 Test: set_md_interleave_iovs_test ...[2024-04-15 20:31:41.935063] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=d380 00:04:58.684 [2024-04-15 20:31:41.935198] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=5b45 00:04:58.684 [2024-04-15 20:31:41.935304] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab757ed, Actual=1ab753ed 00:04:58.684 [2024-04-15 20:31:41.935407] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=b25c3099, Actual=b25c3499 00:04:58.684 [2024-04-15 20:31:41.935521] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.684 [2024-04-15 20:31:41.935630] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App 
Tag: LBA=88, Expected=88, Actual=488 00:04:58.684 [2024-04-15 20:31:41.935739] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:04:58.684 [2024-04-15 20:31:41.935848] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:04:58.684 [2024-04-15 20:31:41.935949] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=2e5b571b 00:04:58.684 [2024-04-15 20:31:41.936055] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=551eeb07 00:04:58.684 [2024-04-15 20:31:41.936256] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc24d3, Actual=a576a7728ecc20d3 00:04:58.684 [2024-04-15 20:31:41.936485] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eece6f5e86a39c21, Actual=eece6f5e86a39821 00:04:58.684 [2024-04-15 20:31:41.936700] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.684 [2024-04-15 20:31:41.936901] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:04:58.684 [2024-04-15 20:31:41.937095] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:04:58.685 [2024-04-15 20:31:41.937295] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:04:58.685 [2024-04-15 20:31:41.937490] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=7ebea035ea6059fd 00:04:58.685 [2024-04-15 20:31:41.937694] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=4bbe5598871631ce 00:04:58.685 passed 00:04:58.685 Test: set_md_interleave_iovs_split_test ...passed 00:04:58.685 Test: dif_generate_stream_pi_16_test ...passed 00:04:58.685 Test: dif_generate_stream_test ...passed 00:04:58.685 Test: set_md_interleave_iovs_alignment_test ...passed 00:04:58.685 Test: dif_generate_split_test ...passed 00:04:58.685 Test: set_md_interleave_iovs_multi_segments_test ...[2024-04-15 20:31:41.941318] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1799:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
00:04:58.685 passed 00:04:58.685 Test: dif_verify_split_test ...passed 00:04:58.685 Test: dif_verify_stream_multi_segments_test ...passed 00:04:58.685 Test: update_crc32c_pi_16_test ...passed 00:04:58.685 Test: update_crc32c_test ...passed 00:04:58.685 Test: dif_update_crc32c_split_test ...passed 00:04:58.685 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:04:58.685 Test: get_range_with_md_test ...passed 00:04:58.685 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:04:58.685 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:04:58.685 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:04:58.685 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:04:58.685 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:04:58.685 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:04:58.685 Test: dif_generate_and_verify_unmap_test ...passed 00:04:58.685 00:04:58.685 Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.685 suites 1 1 n/a 0 0 00:04:58.685 tests 79 79 79 0 0 00:04:58.685 asserts 3584 3584 3584 0 n/a 00:04:58.685 00:04:58.685 Elapsed time = 0.150 seconds 00:04:58.685 20:31:41 -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:04:58.685 00:04:58.685 00:04:58.685 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.685 http://cunit.sourceforge.net/ 00:04:58.685 00:04:58.685 00:04:58.685 Suite: iov 00:04:58.685 Test: test_single_iov ...passed 00:04:58.685 Test: test_simple_iov ...passed 00:04:58.685 Test: test_complex_iov ...passed 00:04:58.685 Test: test_iovs_to_buf ...passed 00:04:58.685 Test: test_buf_to_iovs ...passed 00:04:58.685 Test: test_memset ...passed 00:04:58.685 Test: test_iov_one ...passed 00:04:58.685 Test: test_iov_xfer ...passed 00:04:58.685 00:04:58.685 Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.685 suites 1 1 n/a 0 0 00:04:58.685 tests 8 8 8 0 0 00:04:58.685 asserts 156 156 156 0 n/a 00:04:58.685 00:04:58.685 Elapsed time = 0.000 seconds 00:04:58.685 20:31:42 -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:04:58.685 00:04:58.685 00:04:58.685 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.685 http://cunit.sourceforge.net/ 00:04:58.685 00:04:58.685 00:04:58.685 Suite: math 00:04:58.685 Test: test_serial_number_arithmetic ...passed 00:04:58.685 Suite: erase 00:04:58.685 Test: test_memset_s ...passed 00:04:58.685 00:04:58.685 Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.685 suites 2 2 n/a 0 0 00:04:58.685 tests 2 2 2 0 0 00:04:58.685 asserts 18 18 18 0 n/a 00:04:58.685 00:04:58.685 Elapsed time = 0.000 seconds 00:04:58.685 20:31:42 -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:04:58.685 00:04:58.685 00:04:58.685 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.685 http://cunit.sourceforge.net/ 00:04:58.685 00:04:58.685 00:04:58.685 Suite: pipe 00:04:58.685 Test: test_create_destroy ...passed 00:04:58.685 Test: test_write_get_buffer ...passed 00:04:58.685 Test: test_write_advance ...passed 00:04:58.685 Test: test_read_get_buffer ...passed 00:04:58.685 Test: test_read_advance ...passed 00:04:58.685 Test: test_data ...passed 00:04:58.685 00:04:58.685 Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.685 suites 1 1 n/a 0 0 00:04:58.685 tests 6 6 6 0 0 00:04:58.685 asserts 250 250 250 0 n/a 
00:04:58.685 00:04:58.685 Elapsed time = 0.000 seconds 00:04:58.685 20:31:42 -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:04:58.685 00:04:58.685 00:04:58.685 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.685 http://cunit.sourceforge.net/ 00:04:58.685 00:04:58.685 00:04:58.685 Suite: xor 00:04:58.685 Test: test_xor_gen ...passed 00:04:58.685 00:04:58.685 Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.685 suites 1 1 n/a 0 0 00:04:58.685 tests 1 1 1 0 0 00:04:58.685 asserts 17 17 17 0 n/a 00:04:58.685 00:04:58.685 Elapsed time = 0.010 seconds 00:04:58.685 ************************************ 00:04:58.685 END TEST unittest_util 00:04:58.685 ************************************ 00:04:58.685 00:04:58.685 real 0m0.547s 00:04:58.685 user 0m0.357s 00:04:58.685 sys 0m0.196s 00:04:58.685 20:31:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.685 20:31:42 -- common/autotest_common.sh@10 -- # set +x 00:04:58.685 20:31:42 -- unit/unittest.sh@282 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:04:58.685 20:31:42 -- unit/unittest.sh@283 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:04:58.685 20:31:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:58.685 20:31:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:58.685 20:31:42 -- common/autotest_common.sh@10 -- # set +x 00:04:58.685 ************************************ 00:04:58.685 START TEST unittest_vhost 00:04:58.685 ************************************ 00:04:58.685 20:31:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:04:58.685 00:04:58.685 00:04:58.685 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.685 http://cunit.sourceforge.net/ 00:04:58.685 00:04:58.685 00:04:58.685 Suite: vhost_suite 00:04:58.685 Test: desc_to_iov_test ...passed 00:04:58.685 Test: create_controller_test ...[2024-04-15 20:31:42.179226] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 647:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:04:58.957 [2024-04-15 20:31:42.182568] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:04:58.957 [2024-04-15 20:31:42.182668] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:04:58.957 [2024-04-15 20:31:42.182751] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:04:58.957 [2024-04-15 20:31:42.182814] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:04:58.957 [2024-04-15 20:31:42.182862] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:04:58.957 [2024-04-15 20:31:42.183029] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1798:vhost_user_dev_init: *ERROR*: Resulting socket path for controller 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx[2024-04-15 20:31:42.183779] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 133:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:04:58.957 passed 00:04:58.957 Test: session_find_by_vid_test ...passed 00:04:58.957 Test: remove_controller_test ...passed 00:04:58.957 Test: vq_avail_ring_get_test ...passed 00:04:58.957 Test: vq_packed_ring_test ...passed 00:04:58.957 Test: vhost_blk_construct_test ...passed 00:04:58.957 00:04:58.957 Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.957 suites 1 1 n/a 0 0 00:04:58.957 tests 7 7 7 0 0 00:04:58.957 asserts 145 145 145 0 n/a 00:04:58.957 00:04:58.957 Elapsed time = 0.010 seconds 00:04:58.957 [2024-04-15 20:31:42.185532] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1883:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 
00:04:58.957 ************************************ 00:04:58.957 END TEST unittest_vhost 00:04:58.957 ************************************ 00:04:58.957 00:04:58.957 real 0m0.040s 00:04:58.957 user 0m0.027s 00:04:58.957 sys 0m0.013s 00:04:58.957 20:31:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.957 20:31:42 -- common/autotest_common.sh@10 -- # set +x 00:04:58.957 20:31:42 -- unit/unittest.sh@285 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:04:58.957 20:31:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:58.957 20:31:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:58.957 20:31:42 -- common/autotest_common.sh@10 -- # set +x 00:04:58.957 ************************************ 00:04:58.957 START TEST unittest_dma 00:04:58.957 ************************************ 00:04:58.957 20:31:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:04:58.957 00:04:58.957 00:04:58.957 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.957 http://cunit.sourceforge.net/ 00:04:58.957 00:04:58.957 00:04:58.957 Suite: dma_suite 00:04:58.957 Test: test_dma ...passed 00:04:58.957 00:04:58.957 Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.957 suites 1 1 n/a 0 0 00:04:58.957 tests 1 1 1 0 0 00:04:58.957 asserts 50 50 50 0 n/a 00:04:58.957 00:04:58.957 Elapsed time = 0.000 seconds 00:04:58.957 [2024-04-15 20:31:42.275251] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 37:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:04:58.957 ************************************ 00:04:58.957 END TEST unittest_dma 00:04:58.957 ************************************ 00:04:58.957 00:04:58.957 real 0m0.026s 00:04:58.957 user 0m0.018s 00:04:58.957 sys 0m0.009s 00:04:58.957 20:31:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.957 20:31:42 -- common/autotest_common.sh@10 -- # set +x 00:04:58.957 20:31:42 -- unit/unittest.sh@287 -- # run_test unittest_init unittest_init 00:04:58.957 20:31:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:58.957 20:31:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:58.957 20:31:42 -- common/autotest_common.sh@10 -- # set +x 00:04:58.957 ************************************ 00:04:58.957 START TEST unittest_init 00:04:58.957 ************************************ 00:04:58.957 20:31:42 -- common/autotest_common.sh@1104 -- # unittest_init 00:04:58.957 20:31:42 -- unit/unittest.sh@148 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:04:58.957 00:04:58.957 00:04:58.957 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.957 http://cunit.sourceforge.net/ 00:04:58.957 00:04:58.957 00:04:58.957 Suite: subsystem_suite 00:04:58.957 Test: subsystem_sort_test_depends_on_single ...passed 00:04:58.957 Test: subsystem_sort_test_depends_on_multiple ...passed 00:04:58.957 Test: subsystem_sort_test_missing_dependency ...[2024-04-15 20:31:42.368758] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 190:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:04:58.957 [2024-04-15 20:31:42.369052] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 185:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:04:58.957 passed 00:04:58.957 00:04:58.957 Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.957 suites 1 1 n/a 0 0 00:04:58.957 tests 3 3 3 0 0 00:04:58.957 asserts 20 20 20 0 n/a 00:04:58.957 00:04:58.957 Elapsed time = 0.000 seconds 
00:04:58.957 ************************************ 00:04:58.957 END TEST unittest_init 00:04:58.957 ************************************ 00:04:58.957 00:04:58.957 real 0m0.038s 00:04:58.957 user 0m0.022s 00:04:58.957 sys 0m0.017s 00:04:58.957 20:31:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.957 20:31:42 -- common/autotest_common.sh@10 -- # set +x 00:04:58.957 20:31:42 -- unit/unittest.sh@289 -- # '[' yes = yes ']' 00:04:58.957 20:31:42 -- unit/unittest.sh@289 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:58.957 20:31:42 -- unit/unittest.sh@290 -- # hostname 00:04:58.957 20:31:42 -- unit/unittest.sh@290 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t centos7-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:04:59.216 geninfo: WARNING: invalid characters removed from testname! 00:05:25.765 20:32:09 -- unit/unittest.sh@291 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:05:29.999 20:32:12 -- unit/unittest.sh@292 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:05:32.533 20:32:15 -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:05:34.439 20:32:17 -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:05:36.346 20:32:19 -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:05:38.931 20:32:21 -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o 
/home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:05:40.833 20:32:24 -- unit/unittest.sh@297 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:05:42.737 20:32:25 -- unit/unittest.sh@298 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:05:42.737 20:32:25 -- unit/unittest.sh@299 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:43.305 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:05:43.305 Found 308 entries. 00:05:43.305 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:05:43.305 Writing .css and .png files. 00:05:43.305 Generating output. 00:05:43.305 Processing file include/linux/virtio_ring.h 00:05:43.563 Processing file include/spdk/util.h 00:05:43.563 Processing file include/spdk/endian.h 00:05:43.563 Processing file include/spdk/thread.h 00:05:43.563 Processing file include/spdk/nvme.h 00:05:43.563 Processing file include/spdk/histogram_data.h 00:05:43.563 Processing file include/spdk/nvme_spec.h 00:05:43.563 Processing file include/spdk/bdev_module.h 00:05:43.563 Processing file include/spdk/trace.h 00:05:43.563 Processing file include/spdk/mmio.h 00:05:43.563 Processing file include/spdk/nvmf_transport.h 00:05:43.563 Processing file include/spdk/base64.h 00:05:43.821 Processing file include/spdk_internal/rdma.h 00:05:43.821 Processing file include/spdk_internal/nvme_tcp.h 00:05:43.821 Processing file include/spdk_internal/sock.h 00:05:43.821 Processing file include/spdk_internal/utf.h 00:05:43.821 Processing file include/spdk_internal/sgl.h 00:05:43.821 Processing file include/spdk_internal/virtio.h 00:05:43.821 Processing file lib/accel/accel_sw.c 00:05:43.821 Processing file lib/accel/accel.c 00:05:43.821 Processing file lib/accel/accel_rpc.c 00:05:44.079 Processing file lib/bdev/bdev.c 00:05:44.079 Processing file lib/bdev/bdev_zone.c 00:05:44.079 Processing file lib/bdev/part.c 00:05:44.079 Processing file lib/bdev/bdev_rpc.c 00:05:44.079 Processing file lib/bdev/scsi_nvme.c 00:05:44.338 Processing file lib/blob/blob_bs_dev.c 00:05:44.338 Processing file lib/blob/blobstore.h 00:05:44.338 Processing file lib/blob/request.c 00:05:44.338 Processing file lib/blob/blobstore.c 00:05:44.338 Processing file lib/blob/zeroes.c 00:05:44.338 Processing file lib/blobfs/blobfs.c 00:05:44.338 Processing file lib/blobfs/tree.c 00:05:44.338 Processing file lib/conf/conf.c 00:05:44.596 Processing file lib/dma/dma.c 00:05:44.855 Processing file lib/env_dpdk/pci_virtio.c 00:05:44.855 Processing file lib/env_dpdk/pci_event.c 00:05:44.855 Processing file lib/env_dpdk/pci_vmd.c 00:05:44.855 Processing file lib/env_dpdk/pci_dpdk.c 00:05:44.855 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:05:44.855 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:05:44.855 Processing file lib/env_dpdk/pci_ioat.c 00:05:44.855 Processing file lib/env_dpdk/sigbus_handler.c 00:05:44.855 Processing file lib/env_dpdk/threads.c 00:05:44.855 Processing file lib/env_dpdk/pci_idxd.c 00:05:44.855 Processing file 
lib/env_dpdk/memory.c 00:05:44.855 Processing file lib/env_dpdk/pci.c 00:05:44.855 Processing file lib/env_dpdk/init.c 00:05:44.855 Processing file lib/env_dpdk/env.c 00:05:44.855 Processing file lib/event/app_rpc.c 00:05:44.855 Processing file lib/event/reactor.c 00:05:44.855 Processing file lib/event/app.c 00:05:44.855 Processing file lib/event/scheduler_static.c 00:05:44.855 Processing file lib/event/log_rpc.c 00:05:45.423 Processing file lib/ftl/ftl_debug.h 00:05:45.423 Processing file lib/ftl/ftl_debug.c 00:05:45.423 Processing file lib/ftl/ftl_core.c 00:05:45.423 Processing file lib/ftl/ftl_io.c 00:05:45.423 Processing file lib/ftl/ftl_core.h 00:05:45.423 Processing file lib/ftl/ftl_io.h 00:05:45.423 Processing file lib/ftl/ftl_band.h 00:05:45.423 Processing file lib/ftl/ftl_writer.c 00:05:45.423 Processing file lib/ftl/ftl_band.c 00:05:45.423 Processing file lib/ftl/ftl_trace.c 00:05:45.423 Processing file lib/ftl/ftl_writer.h 00:05:45.423 Processing file lib/ftl/ftl_sb.c 00:05:45.423 Processing file lib/ftl/ftl_p2l.c 00:05:45.423 Processing file lib/ftl/ftl_rq.c 00:05:45.423 Processing file lib/ftl/ftl_band_ops.c 00:05:45.423 Processing file lib/ftl/ftl_init.c 00:05:45.423 Processing file lib/ftl/ftl_nv_cache_io.h 00:05:45.423 Processing file lib/ftl/ftl_nv_cache.c 00:05:45.423 Processing file lib/ftl/ftl_nv_cache.h 00:05:45.423 Processing file lib/ftl/ftl_l2p_flat.c 00:05:45.423 Processing file lib/ftl/ftl_l2p.c 00:05:45.423 Processing file lib/ftl/ftl_reloc.c 00:05:45.423 Processing file lib/ftl/ftl_l2p_cache.c 00:05:45.423 Processing file lib/ftl/ftl_layout.c 00:05:45.423 Processing file lib/ftl/base/ftl_base_bdev.c 00:05:45.423 Processing file lib/ftl/base/ftl_base_dev.c 00:05:45.683 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:05:45.683 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:05:45.683 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:05:45.683 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:05:45.683 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:05:45.683 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:05:45.683 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:05:45.683 Processing file lib/ftl/mngt/ftl_mngt.c 00:05:45.683 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:05:45.683 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:05:45.683 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:05:45.683 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:05:45.683 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:05:45.683 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:05:45.683 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:05:45.942 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:05:45.942 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:05:45.942 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:05:45.942 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:05:45.942 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:05:45.942 Processing file lib/ftl/utils/ftl_property.h 00:05:45.942 Processing file lib/ftl/utils/ftl_bitmap.c 00:05:45.942 Processing file lib/ftl/utils/ftl_conf.c 00:05:45.942 Processing file lib/ftl/utils/ftl_df.h 00:05:45.942 Processing file lib/ftl/utils/ftl_md.c 00:05:45.942 Processing file lib/ftl/utils/ftl_addr_utils.h 00:05:45.942 Processing file lib/ftl/utils/ftl_mempool.c 00:05:45.942 Processing file lib/ftl/utils/ftl_property.c 00:05:46.202 Processing file lib/idxd/idxd.c 00:05:46.202 Processing file lib/idxd/idxd_user.c 00:05:46.202 Processing file lib/idxd/idxd_internal.h 00:05:46.202 Processing file lib/init/subsystem_rpc.c 
00:05:46.202 Processing file lib/init/rpc.c 00:05:46.202 Processing file lib/init/json_config.c 00:05:46.202 Processing file lib/init/subsystem.c 00:05:46.202 Processing file lib/ioat/ioat_internal.h 00:05:46.202 Processing file lib/ioat/ioat.c 00:05:46.770 Processing file lib/iscsi/init_grp.c 00:05:46.770 Processing file lib/iscsi/task.h 00:05:46.770 Processing file lib/iscsi/iscsi_subsystem.c 00:05:46.770 Processing file lib/iscsi/conn.c 00:05:46.770 Processing file lib/iscsi/tgt_node.c 00:05:46.770 Processing file lib/iscsi/iscsi_rpc.c 00:05:46.770 Processing file lib/iscsi/portal_grp.c 00:05:46.770 Processing file lib/iscsi/iscsi.h 00:05:46.770 Processing file lib/iscsi/param.c 00:05:46.770 Processing file lib/iscsi/iscsi.c 00:05:46.770 Processing file lib/iscsi/md5.c 00:05:46.770 Processing file lib/iscsi/task.c 00:05:46.770 Processing file lib/json/json_parse.c 00:05:46.770 Processing file lib/json/json_util.c 00:05:46.770 Processing file lib/json/json_write.c 00:05:46.770 Processing file lib/jsonrpc/jsonrpc_server.c 00:05:46.770 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:05:46.770 Processing file lib/jsonrpc/jsonrpc_client.c 00:05:46.770 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:05:47.029 Processing file lib/log/log_flags.c 00:05:47.029 Processing file lib/log/log_deprecated.c 00:05:47.029 Processing file lib/log/log.c 00:05:47.029 Processing file lib/lvol/lvol.c 00:05:47.029 Processing file lib/nbd/nbd.c 00:05:47.029 Processing file lib/nbd/nbd_rpc.c 00:05:47.288 Processing file lib/notify/notify_rpc.c 00:05:47.288 Processing file lib/notify/notify.c 00:05:47.856 Processing file lib/nvme/nvme_cuse.c 00:05:47.856 Processing file lib/nvme/nvme_ctrlr.c 00:05:47.856 Processing file lib/nvme/nvme_poll_group.c 00:05:47.856 Processing file lib/nvme/nvme_ns_cmd.c 00:05:47.856 Processing file lib/nvme/nvme_tcp.c 00:05:47.856 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:05:47.856 Processing file lib/nvme/nvme_discovery.c 00:05:47.856 Processing file lib/nvme/nvme_vfio_user.c 00:05:47.856 Processing file lib/nvme/nvme_fabric.c 00:05:47.856 Processing file lib/nvme/nvme_opal.c 00:05:47.856 Processing file lib/nvme/nvme_transport.c 00:05:47.856 Processing file lib/nvme/nvme_ns.c 00:05:47.856 Processing file lib/nvme/nvme_pcie_common.c 00:05:47.856 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:05:47.856 Processing file lib/nvme/nvme_io_msg.c 00:05:47.856 Processing file lib/nvme/nvme_pcie_internal.h 00:05:47.856 Processing file lib/nvme/nvme.c 00:05:47.856 Processing file lib/nvme/nvme_pcie.c 00:05:47.856 Processing file lib/nvme/nvme_internal.h 00:05:47.856 Processing file lib/nvme/nvme_zns.c 00:05:47.856 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:05:47.856 Processing file lib/nvme/nvme_rdma.c 00:05:47.856 Processing file lib/nvme/nvme_qpair.c 00:05:47.856 Processing file lib/nvme/nvme_quirks.c 00:05:48.116 Processing file lib/nvmf/nvmf.c 00:05:48.116 Processing file lib/nvmf/nvmf_internal.h 00:05:48.116 Processing file lib/nvmf/nvmf_rpc.c 00:05:48.116 Processing file lib/nvmf/ctrlr.c 00:05:48.116 Processing file lib/nvmf/subsystem.c 00:05:48.116 Processing file lib/nvmf/tcp.c 00:05:48.116 Processing file lib/nvmf/transport.c 00:05:48.116 Processing file lib/nvmf/ctrlr_bdev.c 00:05:48.116 Processing file lib/nvmf/rdma.c 00:05:48.116 Processing file lib/nvmf/ctrlr_discovery.c 00:05:48.375 Processing file lib/rdma/common.c 00:05:48.375 Processing file lib/rdma/rdma_verbs.c 00:05:48.375 Processing file lib/rpc/rpc.c 00:05:48.375 Processing file lib/scsi/port.c 
00:05:48.375 Processing file lib/scsi/scsi_bdev.c 00:05:48.375 Processing file lib/scsi/lun.c 00:05:48.375 Processing file lib/scsi/scsi_pr.c 00:05:48.375 Processing file lib/scsi/task.c 00:05:48.375 Processing file lib/scsi/dev.c 00:05:48.375 Processing file lib/scsi/scsi.c 00:05:48.375 Processing file lib/scsi/scsi_rpc.c 00:05:48.633 Processing file lib/sock/sock_rpc.c 00:05:48.633 Processing file lib/sock/sock.c 00:05:48.633 Processing file lib/thread/thread.c 00:05:48.633 Processing file lib/thread/iobuf.c 00:05:48.633 Processing file lib/trace/trace_rpc.c 00:05:48.633 Processing file lib/trace/trace_flags.c 00:05:48.633 Processing file lib/trace/trace.c 00:05:48.893 Processing file lib/trace_parser/trace.cpp 00:05:48.893 Processing file lib/ut/ut.c 00:05:48.893 Processing file lib/ut_mock/mock.c 00:05:49.461 Processing file lib/util/string.c 00:05:49.461 Processing file lib/util/strerror_tls.c 00:05:49.461 Processing file lib/util/hexlify.c 00:05:49.461 Processing file lib/util/uuid.c 00:05:49.461 Processing file lib/util/fd_group.c 00:05:49.461 Processing file lib/util/crc16.c 00:05:49.461 Processing file lib/util/xor.c 00:05:49.461 Processing file lib/util/math.c 00:05:49.461 Processing file lib/util/dif.c 00:05:49.461 Processing file lib/util/bit_array.c 00:05:49.461 Processing file lib/util/fd.c 00:05:49.461 Processing file lib/util/iov.c 00:05:49.461 Processing file lib/util/crc64.c 00:05:49.461 Processing file lib/util/cpuset.c 00:05:49.461 Processing file lib/util/zipf.c 00:05:49.461 Processing file lib/util/crc32.c 00:05:49.461 Processing file lib/util/crc32c.c 00:05:49.461 Processing file lib/util/crc32_ieee.c 00:05:49.461 Processing file lib/util/file.c 00:05:49.461 Processing file lib/util/pipe.c 00:05:49.461 Processing file lib/util/base64.c 00:05:49.461 Processing file lib/vfio_user/host/vfio_user_pci.c 00:05:49.461 Processing file lib/vfio_user/host/vfio_user.c 00:05:49.461 Processing file lib/vhost/rte_vhost_user.c 00:05:49.461 Processing file lib/vhost/vhost_rpc.c 00:05:49.461 Processing file lib/vhost/vhost_blk.c 00:05:49.461 Processing file lib/vhost/vhost_scsi.c 00:05:49.461 Processing file lib/vhost/vhost.c 00:05:49.461 Processing file lib/vhost/vhost_internal.h 00:05:49.719 Processing file lib/virtio/virtio_vfio_user.c 00:05:49.719 Processing file lib/virtio/virtio.c 00:05:49.719 Processing file lib/virtio/virtio_pci.c 00:05:49.719 Processing file lib/virtio/virtio_vhost_user.c 00:05:49.719 Processing file lib/vmd/vmd.c 00:05:49.719 Processing file lib/vmd/led.c 00:05:49.978 Processing file module/accel/dsa/accel_dsa.c 00:05:49.978 Processing file module/accel/dsa/accel_dsa_rpc.c 00:05:49.978 Processing file module/accel/error/accel_error_rpc.c 00:05:49.978 Processing file module/accel/error/accel_error.c 00:05:49.978 Processing file module/accel/iaa/accel_iaa.c 00:05:49.978 Processing file module/accel/iaa/accel_iaa_rpc.c 00:05:49.978 Processing file module/accel/ioat/accel_ioat.c 00:05:49.978 Processing file module/accel/ioat/accel_ioat_rpc.c 00:05:50.237 Processing file module/bdev/aio/bdev_aio.c 00:05:50.237 Processing file module/bdev/aio/bdev_aio_rpc.c 00:05:50.237 Processing file module/bdev/daos/bdev_daos_rpc.c 00:05:50.237 Processing file module/bdev/daos/bdev_daos.c 00:05:50.237 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:05:50.237 Processing file module/bdev/delay/vbdev_delay.c 00:05:50.496 Processing file module/bdev/error/vbdev_error_rpc.c 00:05:50.496 Processing file module/bdev/error/vbdev_error.c 00:05:50.496 Processing file 
module/bdev/ftl/bdev_ftl_rpc.c 00:05:50.496 Processing file module/bdev/ftl/bdev_ftl.c 00:05:50.496 Processing file module/bdev/gpt/vbdev_gpt.c 00:05:50.496 Processing file module/bdev/gpt/gpt.c 00:05:50.496 Processing file module/bdev/gpt/gpt.h 00:05:50.756 Processing file module/bdev/lvol/vbdev_lvol.c 00:05:50.756 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:05:50.756 Processing file module/bdev/malloc/bdev_malloc.c 00:05:50.756 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:05:50.756 Processing file module/bdev/null/bdev_null_rpc.c 00:05:50.756 Processing file module/bdev/null/bdev_null.c 00:05:51.015 Processing file module/bdev/nvme/bdev_mdns_client.c 00:05:51.015 Processing file module/bdev/nvme/bdev_nvme.c 00:05:51.015 Processing file module/bdev/nvme/vbdev_opal.c 00:05:51.015 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:05:51.015 Processing file module/bdev/nvme/nvme_rpc.c 00:05:51.015 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:05:51.015 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:05:51.275 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:05:51.275 Processing file module/bdev/passthru/vbdev_passthru.c 00:05:51.275 Processing file module/bdev/raid/raid0.c 00:05:51.275 Processing file module/bdev/raid/bdev_raid_rpc.c 00:05:51.275 Processing file module/bdev/raid/bdev_raid.h 00:05:51.275 Processing file module/bdev/raid/concat.c 00:05:51.275 Processing file module/bdev/raid/raid1.c 00:05:51.275 Processing file module/bdev/raid/bdev_raid_sb.c 00:05:51.275 Processing file module/bdev/raid/bdev_raid.c 00:05:51.534 Processing file module/bdev/split/vbdev_split.c 00:05:51.534 Processing file module/bdev/split/vbdev_split_rpc.c 00:05:51.534 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:05:51.534 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:05:51.534 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:05:51.792 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:05:51.792 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:05:51.792 Processing file module/blob/bdev/blob_bdev.c 00:05:51.792 Processing file module/blobfs/bdev/blobfs_bdev.c 00:05:51.792 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:05:51.792 Processing file module/env_dpdk/env_dpdk_rpc.c 00:05:52.051 Processing file module/event/subsystems/accel/accel.c 00:05:52.051 Processing file module/event/subsystems/bdev/bdev.c 00:05:52.051 Processing file module/event/subsystems/iobuf/iobuf.c 00:05:52.051 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:05:52.051 Processing file module/event/subsystems/iscsi/iscsi.c 00:05:52.051 Processing file module/event/subsystems/nbd/nbd.c 00:05:52.310 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:05:52.310 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:05:52.310 Processing file module/event/subsystems/scheduler/scheduler.c 00:05:52.310 Processing file module/event/subsystems/scsi/scsi.c 00:05:52.569 Processing file module/event/subsystems/sock/sock.c 00:05:52.569 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:05:52.569 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:05:52.569 Processing file module/event/subsystems/vmd/vmd.c 00:05:52.569 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:05:52.569 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:05:52.828 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:05:52.828 Processing file module/scheduler/gscheduler/gscheduler.c 
00:05:52.828 Processing file module/sock/sock_kernel.h 00:05:53.087 Processing file module/sock/posix/posix.c 00:05:53.087 Writing directory view page. 00:05:53.087 Overall coverage rate: 00:05:53.087 lines......: 38.7% (38484 of 99494 lines) 00:05:53.087 functions..: 42.4% (3524 of 8317 functions) 00:05:53.087 00:05:53.087 00:05:53.087 ===================== 00:05:53.087 All unit tests passed 00:05:53.087 ===================== 00:05:53.087 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:53.087 20:32:36 -- unit/unittest.sh@302 -- # set +x 00:05:53.087 00:05:53.087 00:05:53.087 ************************************ 00:05:53.087 END TEST unittest 00:05:53.087 ************************************ 00:05:53.087 00:05:53.087 real 2m11.104s 00:05:53.087 user 1m49.083s 00:05:53.087 sys 0m13.354s 00:05:53.087 20:32:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.087 20:32:36 -- common/autotest_common.sh@10 -- # set +x 00:05:53.087 20:32:36 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:05:53.087 20:32:36 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:53.087 20:32:36 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:53.087 20:32:36 -- spdk/autotest.sh@173 -- # timing_enter lib 00:05:53.087 20:32:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:53.087 20:32:36 -- common/autotest_common.sh@10 -- # set +x 00:05:53.087 20:32:36 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:53.087 20:32:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:53.087 20:32:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:53.087 20:32:36 -- common/autotest_common.sh@10 -- # set +x 00:05:53.087 ************************************ 00:05:53.087 START TEST env 00:05:53.087 ************************************ 00:05:53.087 20:32:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:53.087 * Looking for test storage... 
00:05:53.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:53.087 20:32:36 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:53.087 20:32:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:53.088 20:32:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:53.088 20:32:36 -- common/autotest_common.sh@10 -- # set +x 00:05:53.088 ************************************ 00:05:53.088 START TEST env_memory 00:05:53.088 ************************************ 00:05:53.088 20:32:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:53.347 00:05:53.347 00:05:53.347 CUnit - A unit testing framework for C - Version 2.1-3 00:05:53.347 http://cunit.sourceforge.net/ 00:05:53.347 00:05:53.347 00:05:53.347 Suite: memory 00:05:53.347 Test: alloc and free memory map ...[2024-04-15 20:32:36.615158] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:53.347 passed 00:05:53.347 Test: mem map translation ...[2024-04-15 20:32:36.638006] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:53.347 [2024-04-15 20:32:36.638100] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:53.347 [2024-04-15 20:32:36.638154] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:53.347 [2024-04-15 20:32:36.638207] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:53.347 passed 00:05:53.347 Test: mem map registration ...[2024-04-15 20:32:36.666863] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:53.347 [2024-04-15 20:32:36.666951] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:53.347 passed 00:05:53.347 Test: mem map adjacent registrations ...passed 00:05:53.347 00:05:53.347 Run Summary: Type Total Ran Passed Failed Inactive 00:05:53.347 suites 1 1 n/a 0 0 00:05:53.347 tests 4 4 4 0 0 00:05:53.347 asserts 152 152 152 0 n/a 00:05:53.347 00:05:53.347 Elapsed time = 0.110 seconds 00:05:53.347 ************************************ 00:05:53.347 END TEST env_memory 00:05:53.347 ************************************ 00:05:53.347 00:05:53.347 real 0m0.140s 00:05:53.347 user 0m0.118s 00:05:53.347 sys 0m0.022s 00:05:53.347 20:32:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.347 20:32:36 -- common/autotest_common.sh@10 -- # set +x 00:05:53.347 20:32:36 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:53.347 20:32:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:53.347 20:32:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:53.347 20:32:36 -- common/autotest_common.sh@10 -- # set +x 00:05:53.347 ************************************ 00:05:53.347 START TEST env_vtophys 00:05:53.347 ************************************ 00:05:53.347 20:32:36 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:53.606 EAL: lib.eal log level changed from notice to debug 00:05:53.606 EAL: Detected lcore 0 as core 0 on socket 0 00:05:53.606 EAL: Detected lcore 1 as core 0 on socket 0 00:05:53.606 EAL: Detected lcore 2 as core 0 on socket 0 00:05:53.606 EAL: Detected lcore 3 as core 0 on socket 0 00:05:53.606 EAL: Detected lcore 4 as core 0 on socket 0 00:05:53.606 EAL: Detected lcore 5 as core 0 on socket 0 00:05:53.606 EAL: Detected lcore 6 as core 0 on socket 0 00:05:53.606 EAL: Detected lcore 7 as core 0 on socket 0 00:05:53.606 EAL: Detected lcore 8 as core 0 on socket 0 00:05:53.606 EAL: Detected lcore 9 as core 0 on socket 0 00:05:53.606 EAL: Maximum logical cores by configuration: 128 00:05:53.606 EAL: Detected CPU lcores: 10 00:05:53.606 EAL: Detected NUMA nodes: 1 00:05:53.606 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:53.606 EAL: Checking presence of .so 'librte_eal.so.24' 00:05:53.606 EAL: Checking presence of .so 'librte_eal.so' 00:05:53.606 EAL: Detected static linkage of DPDK 00:05:53.606 EAL: No shared files mode enabled, IPC will be disabled 00:05:53.606 EAL: Selected IOVA mode 'PA' 00:05:53.606 EAL: Probing VFIO support... 00:05:53.606 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:53.606 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:53.606 EAL: Ask a virtual area of 0x2e000 bytes 00:05:53.606 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:53.606 EAL: Setting up physically contiguous memory... 00:05:53.606 EAL: Setting maximum number of open files to 4096 00:05:53.606 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:53.606 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:53.606 EAL: Ask a virtual area of 0x61000 bytes 00:05:53.606 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:53.606 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:53.606 EAL: Ask a virtual area of 0x400000000 bytes 00:05:53.606 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:53.606 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:53.606 EAL: Ask a virtual area of 0x61000 bytes 00:05:53.606 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:53.606 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:53.606 EAL: Ask a virtual area of 0x400000000 bytes 00:05:53.606 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:53.606 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:53.606 EAL: Ask a virtual area of 0x61000 bytes 00:05:53.606 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:53.606 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:53.606 EAL: Ask a virtual area of 0x400000000 bytes 00:05:53.606 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:53.606 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:53.606 EAL: Ask a virtual area of 0x61000 bytes 00:05:53.606 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:53.606 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:53.606 EAL: Ask a virtual area of 0x400000000 bytes 00:05:53.606 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:53.606 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:53.606 EAL: Hugepages will be freed exactly as allocated. 
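The env_memory suite above drives SPDK's mem-map translation API through deliberate negative cases: every *ERROR* line (vaddr=2097152 len=1234, vaddr=0x4d2, the non-canonical address 281474976710656) is an invalid-parameter probe the test expects to fail, which is why the suite still reports 4 of 4 tests passed. A minimal sketch of the API under test, assuming the spdk_mem_map signatures from include/spdk/env.h at this revision; the callback name and the 2 MB region are illustrative:

#include "spdk/stdinc.h"
#include "spdk/env.h"

/* Notify hook invoked for every registered/unregistered region; the
 * env_memory test installs one much like this. */
static int
test_notify(void *cb_ctx, struct spdk_mem_map *map,
            enum spdk_mem_map_notify_action action,
            void *vaddr, size_t size)
{
	printf("%s %p len %zu\n",
	       action == SPDK_MEM_MAP_NOTIFY_REGISTER ? "register" : "unregister",
	       vaddr, size);
	return 0;
}

static const struct spdk_mem_map_ops test_ops = {
	.notify_cb = test_notify,
};

static void
mem_map_sketch(void)
{
	struct spdk_mem_map *map;
	uint64_t size = 0x200000, translation;

	/* Allocation replays NOTIFY_REGISTER for all already-known regions;
	 * if the callback errors out, this returns NULL ("Initial mem_map
	 * notify failed" in the log above). */
	map = spdk_mem_map_alloc(0 /* default translation */, &test_ops, NULL);
	if (map == NULL) {
		return;
	}
	/* vaddr and len must be 2 MB multiples; the 0x4d2 and len=1234
	 * calls in the log fail this check as intended. */
	spdk_mem_map_set_translation(map, 0x200000, 0x200000, 0x12345);
	translation = spdk_mem_map_translate(map, 0x200000, &size);
	printf("0x200000 -> 0x%" PRIx64 "\n", translation);
	spdk_mem_map_free(&map);
}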
00:05:53.606 EAL: No shared files mode enabled, IPC is disabled 00:05:53.606 EAL: No shared files mode enabled, IPC is disabled 00:05:53.606 EAL: TSC frequency is ~2490000 KHz 00:05:53.606 EAL: Main lcore 0 is ready (tid=7f6278542180;cpuset=[0]) 00:05:53.606 EAL: Trying to obtain current memory policy. 00:05:53.606 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:53.606 EAL: Restoring previous memory policy: 0 00:05:53.606 EAL: request: mp_malloc_sync 00:05:53.606 EAL: No shared files mode enabled, IPC is disabled 00:05:53.606 EAL: Heap on socket 0 was expanded by 2MB 00:05:53.606 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:53.606 EAL: Mem event callback 'spdk:(nil)' registered 00:05:53.606 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:53.606 00:05:53.606 00:05:53.606 CUnit - A unit testing framework for C - Version 2.1-3 00:05:53.606 http://cunit.sourceforge.net/ 00:05:53.606 00:05:53.606 00:05:53.606 Suite: components_suite 00:05:54.174 Test: vtophys_malloc_test ...passed 00:05:54.174 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:54.174 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.174 EAL: Restoring previous memory policy: 0 00:05:54.174 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.174 EAL: request: mp_malloc_sync 00:05:54.174 EAL: No shared files mode enabled, IPC is disabled 00:05:54.174 EAL: Heap on socket 0 was expanded by 4MB 00:05:54.174 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.174 EAL: request: mp_malloc_sync 00:05:54.174 EAL: No shared files mode enabled, IPC is disabled 00:05:54.174 EAL: Heap on socket 0 was shrunk by 4MB 00:05:54.174 EAL: Trying to obtain current memory policy. 00:05:54.174 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.174 EAL: Restoring previous memory policy: 0 00:05:54.174 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.174 EAL: request: mp_malloc_sync 00:05:54.174 EAL: No shared files mode enabled, IPC is disabled 00:05:54.174 EAL: Heap on socket 0 was expanded by 6MB 00:05:54.174 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.174 EAL: request: mp_malloc_sync 00:05:54.174 EAL: No shared files mode enabled, IPC is disabled 00:05:54.174 EAL: Heap on socket 0 was shrunk by 6MB 00:05:54.174 EAL: Trying to obtain current memory policy. 00:05:54.174 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.174 EAL: Restoring previous memory policy: 0 00:05:54.174 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.174 EAL: request: mp_malloc_sync 00:05:54.174 EAL: No shared files mode enabled, IPC is disabled 00:05:54.174 EAL: Heap on socket 0 was expanded by 10MB 00:05:54.174 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.175 EAL: request: mp_malloc_sync 00:05:54.175 EAL: No shared files mode enabled, IPC is disabled 00:05:54.175 EAL: Heap on socket 0 was shrunk by 10MB 00:05:54.175 EAL: Trying to obtain current memory policy. 
00:05:54.175 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.175 EAL: Restoring previous memory policy: 0 00:05:54.175 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.175 EAL: request: mp_malloc_sync 00:05:54.175 EAL: No shared files mode enabled, IPC is disabled 00:05:54.175 EAL: Heap on socket 0 was expanded by 18MB 00:05:54.175 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.175 EAL: request: mp_malloc_sync 00:05:54.175 EAL: No shared files mode enabled, IPC is disabled 00:05:54.175 EAL: Heap on socket 0 was shrunk by 18MB 00:05:54.175 EAL: Trying to obtain current memory policy. 00:05:54.175 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.175 EAL: Restoring previous memory policy: 0 00:05:54.175 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.175 EAL: request: mp_malloc_sync 00:05:54.175 EAL: No shared files mode enabled, IPC is disabled 00:05:54.175 EAL: Heap on socket 0 was expanded by 34MB 00:05:54.175 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.175 EAL: request: mp_malloc_sync 00:05:54.175 EAL: No shared files mode enabled, IPC is disabled 00:05:54.175 EAL: Heap on socket 0 was shrunk by 34MB 00:05:54.433 EAL: Trying to obtain current memory policy. 00:05:54.433 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.433 EAL: Restoring previous memory policy: 0 00:05:54.433 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.433 EAL: request: mp_malloc_sync 00:05:54.433 EAL: No shared files mode enabled, IPC is disabled 00:05:54.433 EAL: Heap on socket 0 was expanded by 66MB 00:05:54.433 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.433 EAL: request: mp_malloc_sync 00:05:54.433 EAL: No shared files mode enabled, IPC is disabled 00:05:54.433 EAL: Heap on socket 0 was shrunk by 66MB 00:05:54.691 EAL: Trying to obtain current memory policy. 00:05:54.691 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.691 EAL: Restoring previous memory policy: 0 00:05:54.691 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.691 EAL: request: mp_malloc_sync 00:05:54.691 EAL: No shared files mode enabled, IPC is disabled 00:05:54.691 EAL: Heap on socket 0 was expanded by 130MB 00:05:54.949 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.949 EAL: request: mp_malloc_sync 00:05:54.949 EAL: No shared files mode enabled, IPC is disabled 00:05:54.949 EAL: Heap on socket 0 was shrunk by 130MB 00:05:55.208 EAL: Trying to obtain current memory policy. 00:05:55.208 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.208 EAL: Restoring previous memory policy: 0 00:05:55.208 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.208 EAL: request: mp_malloc_sync 00:05:55.208 EAL: No shared files mode enabled, IPC is disabled 00:05:55.208 EAL: Heap on socket 0 was expanded by 258MB 00:05:55.776 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.776 EAL: request: mp_malloc_sync 00:05:55.776 EAL: No shared files mode enabled, IPC is disabled 00:05:55.776 EAL: Heap on socket 0 was shrunk by 258MB 00:05:56.035 EAL: Trying to obtain current memory policy. 
00:05:56.035 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:56.294 EAL: Restoring previous memory policy: 0 00:05:56.294 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.294 EAL: request: mp_malloc_sync 00:05:56.294 EAL: No shared files mode enabled, IPC is disabled 00:05:56.294 EAL: Heap on socket 0 was expanded by 514MB 00:05:57.231 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.231 EAL: request: mp_malloc_sync 00:05:57.231 EAL: No shared files mode enabled, IPC is disabled 00:05:57.231 EAL: Heap on socket 0 was shrunk by 514MB 00:05:58.168 EAL: Trying to obtain current memory policy. 00:05:58.168 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.168 EAL: Restoring previous memory policy: 0 00:05:58.168 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.168 EAL: request: mp_malloc_sync 00:05:58.168 EAL: No shared files mode enabled, IPC is disabled 00:05:58.168 EAL: Heap on socket 0 was expanded by 1026MB 00:06:00.082 EAL: Calling mem event callback 'spdk:(nil)' 00:06:00.341 EAL: request: mp_malloc_sync 00:06:00.341 EAL: No shared files mode enabled, IPC is disabled 00:06:00.341 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:02.246 passed 00:06:02.246 00:06:02.246 Run Summary: Type Total Ran Passed Failed Inactive 00:06:02.246 suites 1 1 n/a 0 0 00:06:02.246 tests 2 2 2 0 0 00:06:02.246 asserts 6713 6713 6713 0 n/a 00:06:02.246 00:06:02.246 Elapsed time = 8.270 seconds 00:06:02.246 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.246 EAL: request: mp_malloc_sync 00:06:02.246 EAL: No shared files mode enabled, IPC is disabled 00:06:02.246 EAL: Heap on socket 0 was shrunk by 2MB 00:06:02.246 EAL: No shared files mode enabled, IPC is disabled 00:06:02.246 EAL: No shared files mode enabled, IPC is disabled 00:06:02.247 EAL: No shared files mode enabled, IPC is disabled 00:06:02.247 00:06:02.247 real 0m8.621s 00:06:02.247 user 0m7.537s 00:06:02.247 sys 0m0.865s 00:06:02.247 20:32:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.247 20:32:45 -- common/autotest_common.sh@10 -- # set +x 00:06:02.247 ************************************ 00:06:02.247 END TEST env_vtophys 00:06:02.247 ************************************ 00:06:02.247 20:32:45 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:02.247 20:32:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:02.247 20:32:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:02.247 20:32:45 -- common/autotest_common.sh@10 -- # set +x 00:06:02.247 ************************************ 00:06:02.247 START TEST env_pci 00:06:02.247 ************************************ 00:06:02.247 20:32:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:02.247 00:06:02.247 00:06:02.247 CUnit - A unit testing framework for C - Version 2.1-3 00:06:02.247 http://cunit.sourceforge.net/ 00:06:02.247 00:06:02.247 00:06:02.247 Suite: pci 00:06:02.247 Test: pci_hook ...[2024-04-15 20:32:45.489257] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 39082 has claimed it 00:06:02.247 passed 00:06:02.247 00:06:02.247 Run Summary: Type Total Ran Passed Failed Inactive 00:06:02.247 suites 1 1 n/a 0 0 00:06:02.247 tests 1 1 1 0 0 00:06:02.247 asserts 25 25 25 0 n/a 00:06:02.247 00:06:02.247 Elapsed time = 0.010 seconds 00:06:02.247 EAL: Cannot find device (10000:00:01.0) 00:06:02.247 EAL: Failed to attach device 
on primary process 00:06:02.247 00:06:02.247 real 0m0.092s 00:06:02.247 user 0m0.043s 00:06:02.247 sys 0m0.050s 00:06:02.247 20:32:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.247 ************************************ 00:06:02.247 END TEST env_pci 00:06:02.247 ************************************ 00:06:02.247 20:32:45 -- common/autotest_common.sh@10 -- # set +x 00:06:02.247 20:32:45 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:02.247 20:32:45 -- env/env.sh@15 -- # uname 00:06:02.247 20:32:45 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:02.247 20:32:45 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:02.247 20:32:45 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:02.247 20:32:45 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:06:02.247 20:32:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:02.247 20:32:45 -- common/autotest_common.sh@10 -- # set +x 00:06:02.247 ************************************ 00:06:02.247 START TEST env_dpdk_post_init 00:06:02.247 ************************************ 00:06:02.247 20:32:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:02.506 EAL: Detected CPU lcores: 10 00:06:02.506 EAL: Detected NUMA nodes: 1 00:06:02.506 EAL: Detected static linkage of DPDK 00:06:02.506 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:02.506 EAL: Selected IOVA mode 'PA' 00:06:02.506 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:02.506 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket 0) 00:06:02.506 Starting DPDK initialization... 00:06:02.506 Starting SPDK post initialization... 00:06:02.506 SPDK NVMe probe 00:06:02.506 Attaching to 0000:00:06.0 00:06:02.506 Attached to 0000:00:06.0 00:06:02.506 Cleaning up... 
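The vtophys suite that finished above is the interesting one for DPDK's dynamic memory mode: each allocation of N MB shows up as "Heap on socket 0 was expanded by N MB", and the matching free hands the hugepages back ("shrunk by N MB"), doubling from 2 MB up to 1026 MB. (The env_pci *ERROR* is likewise expected: pci_ut claims device 10000:00:01.0 twice and passes when the second claim is refused.) What vtophys ultimately asserts is that only DMA-safe allocations have a stable virtual-to-physical translation. A sketch of that assertion, assuming the environment is already initialized the way spdk_tgt does it:

#include "spdk/stdinc.h"
#include "spdk/env.h"

/* Sketch: hugepage-backed memory translates; plain heap memory does not. */
static void
vtophys_sketch(void)
{
	void *dma_buf = spdk_dma_zmalloc(0x200000, 0x200000, NULL);
	void *heap_buf = malloc(4096);
	uint64_t size;

	assert(dma_buf != NULL && heap_buf != NULL);

	size = 0x200000;
	assert(spdk_vtophys(dma_buf, &size) != SPDK_VTOPHYS_ERROR);

	size = 4096;
	/* Not registered with the env layer, so no translation exists. */
	assert(spdk_vtophys(heap_buf, &size) == SPDK_VTOPHYS_ERROR);

	free(heap_buf);
	spdk_dma_free(dma_buf);
}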
00:06:02.506 00:06:02.506 real 0m0.325s 00:06:02.506 user 0m0.060s 00:06:02.506 sys 0m0.072s 00:06:02.506 20:32:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.506 ************************************ 00:06:02.506 END TEST env_dpdk_post_init 00:06:02.506 ************************************ 00:06:02.507 20:32:45 -- common/autotest_common.sh@10 -- # set +x 00:06:02.507 20:32:45 -- env/env.sh@26 -- # uname 00:06:02.507 20:32:45 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:02.507 20:32:45 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:02.507 20:32:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:02.507 20:32:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:02.507 20:32:45 -- common/autotest_common.sh@10 -- # set +x 00:06:02.507 ************************************ 00:06:02.507 START TEST env_mem_callbacks 00:06:02.507 ************************************ 00:06:02.507 20:32:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:02.765 EAL: Detected CPU lcores: 10 00:06:02.765 EAL: Detected NUMA nodes: 1 00:06:02.765 EAL: Detected static linkage of DPDK 00:06:02.765 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:02.765 EAL: Selected IOVA mode 'PA' 00:06:02.765 00:06:02.765 00:06:02.765 CUnit - A unit testing framework for C - Version 2.1-3 00:06:02.765 http://cunit.sourceforge.net/ 00:06:02.765 00:06:02.765 00:06:02.765 Suite: memory 00:06:02.765 Test: test ... 00:06:02.765 register 0x200000200000 2097152 00:06:02.765 malloc 3145728 00:06:02.765 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:02.765 register 0x200000400000 4194304 00:06:02.765 buf 0x2000004fffc0 len 3145728 PASSED 00:06:02.765 malloc 64 00:06:02.765 buf 0x2000004ffec0 len 64 PASSED 00:06:02.765 malloc 4194304 00:06:02.765 register 0x200000800000 6291456 00:06:02.765 buf 0x2000009fffc0 len 4194304 PASSED 00:06:02.765 free 0x2000004fffc0 3145728 00:06:02.765 free 0x2000004ffec0 64 00:06:02.765 unregister 0x200000400000 4194304 PASSED 00:06:02.765 free 0x2000009fffc0 4194304 00:06:02.765 unregister 0x200000800000 6291456 PASSED 00:06:02.765 malloc 8388608 00:06:02.765 register 0x200000400000 10485760 00:06:02.765 buf 0x2000005fffc0 len 8388608 PASSED 00:06:02.765 free 0x2000005fffc0 8388608 00:06:03.025 unregister 0x200000400000 10485760 PASSED 00:06:03.025 passed 00:06:03.025 00:06:03.025 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.025 suites 1 1 n/a 0 0 00:06:03.025 tests 1 1 1 0 0 00:06:03.025 asserts 15 15 15 0 n/a 00:06:03.025 00:06:03.025 Elapsed time = 0.090 seconds 00:06:03.025 ************************************ 00:06:03.025 END TEST env_mem_callbacks 00:06:03.025 ************************************ 00:06:03.025 00:06:03.025 real 0m0.292s 00:06:03.025 user 0m0.122s 00:06:03.025 sys 0m0.063s 00:06:03.025 20:32:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.025 20:32:46 -- common/autotest_common.sh@10 -- # set +x 00:06:03.025 00:06:03.025 real 0m9.884s 00:06:03.025 user 0m8.029s 00:06:03.025 sys 0m1.320s 00:06:03.025 20:32:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.025 20:32:46 -- common/autotest_common.sh@10 -- # set +x 00:06:03.025 ************************************ 00:06:03.025 END TEST env 00:06:03.025 ************************************ 00:06:03.025 20:32:46 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
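The register/unregister lines in env_mem_callbacks above are printed from exactly the kind of notify hook sketched earlier, and they show the allocator working in 2 MB granules: malloc 3145728 (3 MB) arrives as "register 0x200000400000 4194304" because the backing region is rounded up. The same callbacks fire for application-owned memory handed to spdk_mem_register(); a sketch, with the MAP_HUGETLB mapping as an assumption (any pinned region sized in 2 MB multiples qualifies):

#include <sys/mman.h>
#include "spdk/stdinc.h"
#include "spdk/env.h"

/* Sketch: register an externally allocated region so every mem map
 * (and its notify callback) learns about it, then tear it down. */
static int
register_external_region(void)
{
	size_t len = 2 * 1024 * 1024;	/* must be a 2 MB multiple */
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

	if (buf == MAP_FAILED) {
		return -1;
	}
	if (spdk_mem_register(buf, len) != 0) {	/* fires NOTIFY_REGISTER */
		munmap(buf, len);
		return -1;
	}
	/* ... region is now usable as an I/O buffer ... */
	spdk_mem_unregister(buf, len);		/* fires NOTIFY_UNREGISTER */
	munmap(buf, len);
	return 0;
}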
00:06:03.025 20:32:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:03.025 20:32:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:03.025 20:32:46 -- common/autotest_common.sh@10 -- # set +x 00:06:03.025 ************************************ 00:06:03.025 START TEST rpc 00:06:03.025 ************************************ 00:06:03.025 20:32:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:03.025 * Looking for test storage... 00:06:03.025 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:03.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.025 20:32:46 -- rpc/rpc.sh@65 -- # spdk_pid=39223 00:06:03.025 20:32:46 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:03.025 20:32:46 -- rpc/rpc.sh@67 -- # waitforlisten 39223 00:06:03.025 20:32:46 -- common/autotest_common.sh@819 -- # '[' -z 39223 ']' 00:06:03.025 20:32:46 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:03.025 20:32:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.025 20:32:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:03.025 20:32:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.025 20:32:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:03.025 20:32:46 -- common/autotest_common.sh@10 -- # set +x 00:06:03.284 [2024-04-15 20:32:46.667787] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:03.284 [2024-04-15 20:32:46.667947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid39223 ] 00:06:03.544 [2024-04-15 20:32:46.838145] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.544 [2024-04-15 20:32:47.014280] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:03.544 [2024-04-15 20:32:47.014462] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:03.544 [2024-04-15 20:32:47.014496] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 39223' to capture a snapshot of events at runtime. 00:06:03.544 [2024-04-15 20:32:47.014514] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid39223 for offline analysis/debug. 
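Everything rpc.sh does from here is a thin wrapper: each rpc_cmd is one JSON-RPC 2.0 object written to the Unix socket spdk_tgt is now listening on (/var/tmp/spdk.sock). A standalone equivalent of 'rpc_cmd bdev_get_bdevs', with its two simplifying assumptions (unframed request, single-read reply) called out in the comments:

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int
main(void)
{
	int fd = socket(AF_UNIX, SOCK_STREAM, 0);
	struct sockaddr_un addr = { .sun_family = AF_UNIX };

	if (fd < 0) {
		return 1;
	}
	strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);
	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
		perror("connect");
		return 1;
	}

	/* Assumption: the server parses a bare JSON object off the stream,
	 * the same bytes rpc.py sends. */
	const char *req = "{\"jsonrpc\":\"2.0\",\"method\":\"bdev_get_bdevs\",\"id\":1}";
	write(fd, req, strlen(req));

	/* Assumption: the whole response arrives in one read; a real client
	 * would loop until the JSON object is complete. */
	char buf[65536];
	ssize_t n = read(fd, buf, sizeof(buf) - 1);
	if (n > 0) {
		buf[n] = '\0';
		printf("%s\n", buf);
	}
	close(fd);
	return 0;
}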
00:06:03.544 [2024-04-15 20:32:47.014569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.922 20:32:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:04.922 20:32:48 -- common/autotest_common.sh@852 -- # return 0 00:06:04.922 20:32:48 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:04.922 20:32:48 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:04.922 20:32:48 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:04.922 20:32:48 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:04.922 20:32:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:04.922 20:32:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:04.922 20:32:48 -- common/autotest_common.sh@10 -- # set +x 00:06:04.922 ************************************ 00:06:04.922 START TEST rpc_integrity 00:06:04.922 ************************************ 00:06:04.922 20:32:48 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:06:04.922 20:32:48 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:04.922 20:32:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.922 20:32:48 -- common/autotest_common.sh@10 -- # set +x 00:06:04.922 20:32:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.922 20:32:48 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:04.922 20:32:48 -- rpc/rpc.sh@13 -- # jq length 00:06:04.922 20:32:48 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:04.922 20:32:48 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:04.922 20:32:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.922 20:32:48 -- common/autotest_common.sh@10 -- # set +x 00:06:04.922 20:32:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.922 20:32:48 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:04.922 20:32:48 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:04.922 20:32:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.922 20:32:48 -- common/autotest_common.sh@10 -- # set +x 00:06:04.922 20:32:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.922 20:32:48 -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:04.922 { 00:06:04.922 "name": "Malloc0", 00:06:04.922 "aliases": [ 00:06:04.922 "c9d10a34-94a0-413f-a3d4-060c247c4f0a" 00:06:04.922 ], 00:06:04.922 "product_name": "Malloc disk", 00:06:04.922 "block_size": 512, 00:06:04.922 "num_blocks": 16384, 00:06:04.922 "uuid": "c9d10a34-94a0-413f-a3d4-060c247c4f0a", 00:06:04.922 "assigned_rate_limits": { 00:06:04.922 "rw_ios_per_sec": 0, 00:06:04.922 "rw_mbytes_per_sec": 0, 00:06:04.922 "r_mbytes_per_sec": 0, 00:06:04.922 "w_mbytes_per_sec": 0 00:06:04.922 }, 00:06:04.922 "claimed": false, 00:06:04.922 "zoned": false, 00:06:04.922 "supported_io_types": { 00:06:04.922 "read": true, 00:06:04.922 "write": true, 00:06:04.922 "unmap": true, 00:06:04.922 "write_zeroes": true, 00:06:04.922 "flush": true, 00:06:04.922 "reset": true, 00:06:04.922 "compare": false, 00:06:04.922 "compare_and_write": false, 00:06:04.922 "abort": true, 00:06:04.922 "nvme_admin": false, 00:06:04.922 "nvme_io": false 00:06:04.922 }, 00:06:04.922 "memory_domains": [ 00:06:04.922 { 00:06:04.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:04.922 
"dma_device_type": 2 00:06:04.922 } 00:06:04.922 ], 00:06:04.922 "driver_specific": {} 00:06:04.922 } 00:06:04.922 ]' 00:06:04.922 20:32:48 -- rpc/rpc.sh@17 -- # jq length 00:06:04.922 20:32:48 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:04.922 20:32:48 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:04.922 20:32:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.922 20:32:48 -- common/autotest_common.sh@10 -- # set +x 00:06:04.922 [2024-04-15 20:32:48.204189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:04.922 [2024-04-15 20:32:48.204254] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:04.922 [2024-04-15 20:32:48.204306] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000028280 00:06:04.922 [2024-04-15 20:32:48.204332] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:04.922 [2024-04-15 20:32:48.206003] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:04.922 [2024-04-15 20:32:48.206056] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:04.922 Passthru0 00:06:04.922 20:32:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.922 20:32:48 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:04.922 20:32:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.922 20:32:48 -- common/autotest_common.sh@10 -- # set +x 00:06:04.922 20:32:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.922 20:32:48 -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:04.922 { 00:06:04.922 "name": "Malloc0", 00:06:04.922 "aliases": [ 00:06:04.922 "c9d10a34-94a0-413f-a3d4-060c247c4f0a" 00:06:04.922 ], 00:06:04.922 "product_name": "Malloc disk", 00:06:04.922 "block_size": 512, 00:06:04.922 "num_blocks": 16384, 00:06:04.922 "uuid": "c9d10a34-94a0-413f-a3d4-060c247c4f0a", 00:06:04.922 "assigned_rate_limits": { 00:06:04.922 "rw_ios_per_sec": 0, 00:06:04.922 "rw_mbytes_per_sec": 0, 00:06:04.922 "r_mbytes_per_sec": 0, 00:06:04.922 "w_mbytes_per_sec": 0 00:06:04.922 }, 00:06:04.922 "claimed": true, 00:06:04.922 "claim_type": "exclusive_write", 00:06:04.922 "zoned": false, 00:06:04.922 "supported_io_types": { 00:06:04.922 "read": true, 00:06:04.922 "write": true, 00:06:04.922 "unmap": true, 00:06:04.922 "write_zeroes": true, 00:06:04.922 "flush": true, 00:06:04.922 "reset": true, 00:06:04.922 "compare": false, 00:06:04.922 "compare_and_write": false, 00:06:04.922 "abort": true, 00:06:04.922 "nvme_admin": false, 00:06:04.922 "nvme_io": false 00:06:04.922 }, 00:06:04.922 "memory_domains": [ 00:06:04.923 { 00:06:04.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:04.923 "dma_device_type": 2 00:06:04.923 } 00:06:04.923 ], 00:06:04.923 "driver_specific": {} 00:06:04.923 }, 00:06:04.923 { 00:06:04.923 "name": "Passthru0", 00:06:04.923 "aliases": [ 00:06:04.923 "e929261d-be4d-5275-b418-f9323897e94b" 00:06:04.923 ], 00:06:04.923 "product_name": "passthru", 00:06:04.923 "block_size": 512, 00:06:04.923 "num_blocks": 16384, 00:06:04.923 "uuid": "e929261d-be4d-5275-b418-f9323897e94b", 00:06:04.923 "assigned_rate_limits": { 00:06:04.923 "rw_ios_per_sec": 0, 00:06:04.923 "rw_mbytes_per_sec": 0, 00:06:04.923 "r_mbytes_per_sec": 0, 00:06:04.923 "w_mbytes_per_sec": 0 00:06:04.923 }, 00:06:04.923 "claimed": false, 00:06:04.923 "zoned": false, 00:06:04.923 "supported_io_types": { 00:06:04.923 "read": true, 00:06:04.923 "write": true, 00:06:04.923 "unmap": true, 00:06:04.923 
"write_zeroes": true, 00:06:04.923 "flush": true, 00:06:04.923 "reset": true, 00:06:04.923 "compare": false, 00:06:04.923 "compare_and_write": false, 00:06:04.923 "abort": true, 00:06:04.923 "nvme_admin": false, 00:06:04.923 "nvme_io": false 00:06:04.923 }, 00:06:04.923 "memory_domains": [ 00:06:04.923 { 00:06:04.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:04.923 "dma_device_type": 2 00:06:04.923 } 00:06:04.923 ], 00:06:04.923 "driver_specific": { 00:06:04.923 "passthru": { 00:06:04.923 "name": "Passthru0", 00:06:04.923 "base_bdev_name": "Malloc0" 00:06:04.923 } 00:06:04.923 } 00:06:04.923 } 00:06:04.923 ]' 00:06:04.923 20:32:48 -- rpc/rpc.sh@21 -- # jq length 00:06:04.923 20:32:48 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:04.923 20:32:48 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:04.923 20:32:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.923 20:32:48 -- common/autotest_common.sh@10 -- # set +x 00:06:04.923 20:32:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.923 20:32:48 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:04.923 20:32:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.923 20:32:48 -- common/autotest_common.sh@10 -- # set +x 00:06:04.923 20:32:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.923 20:32:48 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:04.923 20:32:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.923 20:32:48 -- common/autotest_common.sh@10 -- # set +x 00:06:04.923 20:32:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.923 20:32:48 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:04.923 20:32:48 -- rpc/rpc.sh@26 -- # jq length 00:06:04.923 20:32:48 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:04.923 00:06:04.923 real 0m0.343s 00:06:04.923 user 0m0.211s 00:06:04.923 sys 0m0.044s 00:06:04.923 20:32:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.923 20:32:48 -- common/autotest_common.sh@10 -- # set +x 00:06:04.923 ************************************ 00:06:04.923 END TEST rpc_integrity 00:06:04.923 ************************************ 00:06:05.182 20:32:48 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:05.182 20:32:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:05.182 20:32:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:05.182 20:32:48 -- common/autotest_common.sh@10 -- # set +x 00:06:05.182 ************************************ 00:06:05.182 START TEST rpc_plugins 00:06:05.182 ************************************ 00:06:05.182 20:32:48 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:06:05.182 20:32:48 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:05.182 20:32:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:05.182 20:32:48 -- common/autotest_common.sh@10 -- # set +x 00:06:05.182 20:32:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:05.182 20:32:48 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:05.182 20:32:48 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:05.182 20:32:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:05.182 20:32:48 -- common/autotest_common.sh@10 -- # set +x 00:06:05.182 20:32:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:05.182 20:32:48 -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:05.182 { 00:06:05.182 "name": "Malloc1", 00:06:05.182 "aliases": [ 00:06:05.182 "f1fdee45-e4fc-4e7e-a04c-ddb007cf8602" 00:06:05.182 ], 00:06:05.182 "product_name": "Malloc disk", 00:06:05.182 
"block_size": 4096, 00:06:05.182 "num_blocks": 256, 00:06:05.182 "uuid": "f1fdee45-e4fc-4e7e-a04c-ddb007cf8602", 00:06:05.182 "assigned_rate_limits": { 00:06:05.182 "rw_ios_per_sec": 0, 00:06:05.182 "rw_mbytes_per_sec": 0, 00:06:05.182 "r_mbytes_per_sec": 0, 00:06:05.182 "w_mbytes_per_sec": 0 00:06:05.182 }, 00:06:05.182 "claimed": false, 00:06:05.182 "zoned": false, 00:06:05.182 "supported_io_types": { 00:06:05.182 "read": true, 00:06:05.182 "write": true, 00:06:05.182 "unmap": true, 00:06:05.182 "write_zeroes": true, 00:06:05.182 "flush": true, 00:06:05.182 "reset": true, 00:06:05.182 "compare": false, 00:06:05.182 "compare_and_write": false, 00:06:05.182 "abort": true, 00:06:05.182 "nvme_admin": false, 00:06:05.182 "nvme_io": false 00:06:05.182 }, 00:06:05.182 "memory_domains": [ 00:06:05.182 { 00:06:05.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.182 "dma_device_type": 2 00:06:05.182 } 00:06:05.182 ], 00:06:05.182 "driver_specific": {} 00:06:05.182 } 00:06:05.182 ]' 00:06:05.182 20:32:48 -- rpc/rpc.sh@32 -- # jq length 00:06:05.182 20:32:48 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:05.182 20:32:48 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:05.182 20:32:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:05.182 20:32:48 -- common/autotest_common.sh@10 -- # set +x 00:06:05.182 20:32:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:05.182 20:32:48 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:05.182 20:32:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:05.182 20:32:48 -- common/autotest_common.sh@10 -- # set +x 00:06:05.182 20:32:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:05.182 20:32:48 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:05.182 20:32:48 -- rpc/rpc.sh@36 -- # jq length 00:06:05.182 ************************************ 00:06:05.182 END TEST rpc_plugins 00:06:05.182 ************************************ 00:06:05.182 20:32:48 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:05.182 00:06:05.182 real 0m0.167s 00:06:05.182 user 0m0.107s 00:06:05.182 sys 0m0.021s 00:06:05.182 20:32:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.182 20:32:48 -- common/autotest_common.sh@10 -- # set +x 00:06:05.182 20:32:48 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:05.182 20:32:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:05.182 20:32:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:05.182 20:32:48 -- common/autotest_common.sh@10 -- # set +x 00:06:05.182 ************************************ 00:06:05.182 START TEST rpc_trace_cmd_test 00:06:05.182 ************************************ 00:06:05.182 20:32:48 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:06:05.182 20:32:48 -- rpc/rpc.sh@40 -- # local info 00:06:05.182 20:32:48 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:05.182 20:32:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:05.182 20:32:48 -- common/autotest_common.sh@10 -- # set +x 00:06:05.441 20:32:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:05.441 20:32:48 -- rpc/rpc.sh@42 -- # info='{ 00:06:05.441 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid39223", 00:06:05.441 "tpoint_group_mask": "0x8", 00:06:05.441 "iscsi_conn": { 00:06:05.441 "mask": "0x2", 00:06:05.441 "tpoint_mask": "0x0" 00:06:05.441 }, 00:06:05.441 "scsi": { 00:06:05.441 "mask": "0x4", 00:06:05.441 "tpoint_mask": "0x0" 00:06:05.441 }, 00:06:05.441 "bdev": { 00:06:05.441 "mask": "0x8", 00:06:05.441 "tpoint_mask": 
"0xffffffffffffffff" 00:06:05.441 }, 00:06:05.441 "nvmf_rdma": { 00:06:05.441 "mask": "0x10", 00:06:05.441 "tpoint_mask": "0x0" 00:06:05.441 }, 00:06:05.441 "nvmf_tcp": { 00:06:05.441 "mask": "0x20", 00:06:05.441 "tpoint_mask": "0x0" 00:06:05.441 }, 00:06:05.441 "ftl": { 00:06:05.442 "mask": "0x40", 00:06:05.442 "tpoint_mask": "0x0" 00:06:05.442 }, 00:06:05.442 "blobfs": { 00:06:05.442 "mask": "0x80", 00:06:05.442 "tpoint_mask": "0x0" 00:06:05.442 }, 00:06:05.442 "dsa": { 00:06:05.442 "mask": "0x200", 00:06:05.442 "tpoint_mask": "0x0" 00:06:05.442 }, 00:06:05.442 "thread": { 00:06:05.442 "mask": "0x400", 00:06:05.442 "tpoint_mask": "0x0" 00:06:05.442 }, 00:06:05.442 "nvme_pcie": { 00:06:05.442 "mask": "0x800", 00:06:05.442 "tpoint_mask": "0x0" 00:06:05.442 }, 00:06:05.442 "iaa": { 00:06:05.442 "mask": "0x1000", 00:06:05.442 "tpoint_mask": "0x0" 00:06:05.442 }, 00:06:05.442 "nvme_tcp": { 00:06:05.442 "mask": "0x2000", 00:06:05.442 "tpoint_mask": "0x0" 00:06:05.442 }, 00:06:05.442 "bdev_nvme": { 00:06:05.442 "mask": "0x4000", 00:06:05.442 "tpoint_mask": "0x0" 00:06:05.442 } 00:06:05.442 }' 00:06:05.442 20:32:48 -- rpc/rpc.sh@43 -- # jq length 00:06:05.442 20:32:48 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:06:05.442 20:32:48 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:05.442 20:32:48 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:05.442 20:32:48 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:05.442 20:32:48 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:05.442 20:32:48 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:05.442 20:32:48 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:05.442 20:32:48 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:05.701 20:32:48 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:05.701 ************************************ 00:06:05.701 END TEST rpc_trace_cmd_test 00:06:05.701 ************************************ 00:06:05.701 00:06:05.701 real 0m0.299s 00:06:05.701 user 0m0.255s 00:06:05.701 sys 0m0.040s 00:06:05.701 20:32:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.701 20:32:48 -- common/autotest_common.sh@10 -- # set +x 00:06:05.701 20:32:49 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:05.701 20:32:49 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:05.701 20:32:49 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:05.701 20:32:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:05.701 20:32:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:05.701 20:32:49 -- common/autotest_common.sh@10 -- # set +x 00:06:05.701 ************************************ 00:06:05.701 START TEST rpc_daemon_integrity 00:06:05.701 ************************************ 00:06:05.701 20:32:49 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:06:05.701 20:32:49 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:05.701 20:32:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:05.701 20:32:49 -- common/autotest_common.sh@10 -- # set +x 00:06:05.701 20:32:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:05.701 20:32:49 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:05.701 20:32:49 -- rpc/rpc.sh@13 -- # jq length 00:06:05.701 20:32:49 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:05.701 20:32:49 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:05.701 20:32:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:05.701 20:32:49 -- common/autotest_common.sh@10 -- # set +x 00:06:05.701 20:32:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:05.701 20:32:49 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:05.701 20:32:49 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:05.701 20:32:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:05.701 20:32:49 -- common/autotest_common.sh@10 -- # set +x 00:06:05.701 20:32:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:05.701 20:32:49 -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:05.701 { 00:06:05.701 "name": "Malloc2", 00:06:05.701 "aliases": [ 00:06:05.701 "5864c286-dc33-44ae-b2d8-ca0ad3c1eb9d" 00:06:05.701 ], 00:06:05.701 "product_name": "Malloc disk", 00:06:05.701 "block_size": 512, 00:06:05.701 "num_blocks": 16384, 00:06:05.701 "uuid": "5864c286-dc33-44ae-b2d8-ca0ad3c1eb9d", 00:06:05.701 "assigned_rate_limits": { 00:06:05.701 "rw_ios_per_sec": 0, 00:06:05.701 "rw_mbytes_per_sec": 0, 00:06:05.701 "r_mbytes_per_sec": 0, 00:06:05.701 "w_mbytes_per_sec": 0 00:06:05.701 }, 00:06:05.701 "claimed": false, 00:06:05.701 "zoned": false, 00:06:05.701 "supported_io_types": { 00:06:05.701 "read": true, 00:06:05.701 "write": true, 00:06:05.701 "unmap": true, 00:06:05.701 "write_zeroes": true, 00:06:05.701 "flush": true, 00:06:05.701 "reset": true, 00:06:05.701 "compare": false, 00:06:05.701 "compare_and_write": false, 00:06:05.701 "abort": true, 00:06:05.701 "nvme_admin": false, 00:06:05.701 "nvme_io": false 00:06:05.701 }, 00:06:05.701 "memory_domains": [ 00:06:05.701 { 00:06:05.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.701 "dma_device_type": 2 00:06:05.701 } 00:06:05.701 ], 00:06:05.701 "driver_specific": {} 00:06:05.701 } 00:06:05.701 ]' 00:06:05.701 20:32:49 -- rpc/rpc.sh@17 -- # jq length 00:06:05.701 20:32:49 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:05.701 20:32:49 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:05.701 20:32:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:05.701 20:32:49 -- common/autotest_common.sh@10 -- # set +x 00:06:05.961 [2024-04-15 20:32:49.205844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:05.961 [2024-04-15 20:32:49.205903] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:05.961 [2024-04-15 20:32:49.205940] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002a680 00:06:05.961 [2024-04-15 20:32:49.205957] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:05.961 [2024-04-15 20:32:49.207504] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:05.961 [2024-04-15 20:32:49.207565] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:05.961 Passthru0 00:06:05.961 20:32:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:05.961 20:32:49 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:05.961 20:32:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:05.961 20:32:49 -- common/autotest_common.sh@10 -- # set +x 00:06:05.961 20:32:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:05.961 20:32:49 -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:05.961 { 00:06:05.961 "name": "Malloc2", 00:06:05.961 "aliases": [ 00:06:05.961 "5864c286-dc33-44ae-b2d8-ca0ad3c1eb9d" 00:06:05.961 ], 00:06:05.961 "product_name": "Malloc disk", 00:06:05.961 "block_size": 512, 00:06:05.961 "num_blocks": 16384, 00:06:05.961 "uuid": "5864c286-dc33-44ae-b2d8-ca0ad3c1eb9d", 00:06:05.961 "assigned_rate_limits": { 00:06:05.961 "rw_ios_per_sec": 0, 00:06:05.961 "rw_mbytes_per_sec": 0, 00:06:05.961 "r_mbytes_per_sec": 0, 00:06:05.961 
"w_mbytes_per_sec": 0 00:06:05.961 }, 00:06:05.961 "claimed": true, 00:06:05.961 "claim_type": "exclusive_write", 00:06:05.961 "zoned": false, 00:06:05.961 "supported_io_types": { 00:06:05.961 "read": true, 00:06:05.961 "write": true, 00:06:05.961 "unmap": true, 00:06:05.961 "write_zeroes": true, 00:06:05.961 "flush": true, 00:06:05.961 "reset": true, 00:06:05.961 "compare": false, 00:06:05.961 "compare_and_write": false, 00:06:05.961 "abort": true, 00:06:05.961 "nvme_admin": false, 00:06:05.961 "nvme_io": false 00:06:05.961 }, 00:06:05.961 "memory_domains": [ 00:06:05.961 { 00:06:05.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.961 "dma_device_type": 2 00:06:05.961 } 00:06:05.961 ], 00:06:05.961 "driver_specific": {} 00:06:05.961 }, 00:06:05.961 { 00:06:05.961 "name": "Passthru0", 00:06:05.961 "aliases": [ 00:06:05.961 "00ec6d62-404a-50f5-96b9-f7eecc0f773f" 00:06:05.961 ], 00:06:05.961 "product_name": "passthru", 00:06:05.961 "block_size": 512, 00:06:05.961 "num_blocks": 16384, 00:06:05.961 "uuid": "00ec6d62-404a-50f5-96b9-f7eecc0f773f", 00:06:05.961 "assigned_rate_limits": { 00:06:05.961 "rw_ios_per_sec": 0, 00:06:05.961 "rw_mbytes_per_sec": 0, 00:06:05.961 "r_mbytes_per_sec": 0, 00:06:05.961 "w_mbytes_per_sec": 0 00:06:05.961 }, 00:06:05.961 "claimed": false, 00:06:05.961 "zoned": false, 00:06:05.961 "supported_io_types": { 00:06:05.961 "read": true, 00:06:05.961 "write": true, 00:06:05.961 "unmap": true, 00:06:05.961 "write_zeroes": true, 00:06:05.961 "flush": true, 00:06:05.961 "reset": true, 00:06:05.961 "compare": false, 00:06:05.961 "compare_and_write": false, 00:06:05.961 "abort": true, 00:06:05.961 "nvme_admin": false, 00:06:05.961 "nvme_io": false 00:06:05.961 }, 00:06:05.961 "memory_domains": [ 00:06:05.961 { 00:06:05.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.961 "dma_device_type": 2 00:06:05.961 } 00:06:05.961 ], 00:06:05.961 "driver_specific": { 00:06:05.961 "passthru": { 00:06:05.961 "name": "Passthru0", 00:06:05.961 "base_bdev_name": "Malloc2" 00:06:05.961 } 00:06:05.961 } 00:06:05.961 } 00:06:05.961 ]' 00:06:05.961 20:32:49 -- rpc/rpc.sh@21 -- # jq length 00:06:05.961 20:32:49 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:05.961 20:32:49 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:05.961 20:32:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:05.961 20:32:49 -- common/autotest_common.sh@10 -- # set +x 00:06:05.961 20:32:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:05.961 20:32:49 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:05.961 20:32:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:05.961 20:32:49 -- common/autotest_common.sh@10 -- # set +x 00:06:05.961 20:32:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:05.961 20:32:49 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:05.961 20:32:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:05.961 20:32:49 -- common/autotest_common.sh@10 -- # set +x 00:06:05.961 20:32:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:05.961 20:32:49 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:05.961 20:32:49 -- rpc/rpc.sh@26 -- # jq length 00:06:05.961 ************************************ 00:06:05.961 END TEST rpc_daemon_integrity 00:06:05.961 ************************************ 00:06:05.961 20:32:49 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:05.961 00:06:05.961 real 0m0.340s 00:06:05.961 user 0m0.205s 00:06:05.961 sys 0m0.049s 00:06:05.961 20:32:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.961 
20:32:49 -- common/autotest_common.sh@10 -- # set +x 00:06:05.961 20:32:49 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:05.961 20:32:49 -- rpc/rpc.sh@84 -- # killprocess 39223 00:06:05.961 20:32:49 -- common/autotest_common.sh@926 -- # '[' -z 39223 ']' 00:06:05.961 20:32:49 -- common/autotest_common.sh@930 -- # kill -0 39223 00:06:05.961 20:32:49 -- common/autotest_common.sh@931 -- # uname 00:06:05.961 20:32:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:05.961 20:32:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 39223 00:06:05.961 20:32:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:05.962 20:32:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:05.962 20:32:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 39223' 00:06:05.962 killing process with pid 39223 00:06:05.962 20:32:49 -- common/autotest_common.sh@945 -- # kill 39223 00:06:05.962 20:32:49 -- common/autotest_common.sh@950 -- # wait 39223 00:06:08.529 00:06:08.529 real 0m5.227s 00:06:08.529 user 0m5.912s 00:06:08.529 sys 0m0.818s 00:06:08.529 20:32:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.529 ************************************ 00:06:08.529 END TEST rpc 00:06:08.529 ************************************ 00:06:08.529 20:32:51 -- common/autotest_common.sh@10 -- # set +x 00:06:08.529 20:32:51 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:08.529 20:32:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:08.529 20:32:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:08.529 20:32:51 -- common/autotest_common.sh@10 -- # set +x 00:06:08.529 ************************************ 00:06:08.529 START TEST rpc_client 00:06:08.529 ************************************ 00:06:08.529 20:32:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:08.529 * Looking for test storage... 
00:06:08.529 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:08.529 20:32:51 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:08.529 OK 00:06:08.529 20:32:51 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:08.529 ************************************ 00:06:08.529 END TEST rpc_client 00:06:08.529 ************************************ 00:06:08.529 00:06:08.529 real 0m0.273s 00:06:08.529 user 0m0.088s 00:06:08.529 sys 0m0.104s 00:06:08.529 20:32:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.529 20:32:51 -- common/autotest_common.sh@10 -- # set +x 00:06:08.529 20:32:52 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:08.529 20:32:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:08.529 20:32:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:08.529 20:32:52 -- common/autotest_common.sh@10 -- # set +x 00:06:08.529 ************************************ 00:06:08.529 START TEST json_config 00:06:08.529 ************************************ 00:06:08.529 20:32:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:08.788 20:32:52 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:08.788 20:32:52 -- nvmf/common.sh@7 -- # uname -s 00:06:08.788 20:32:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:08.788 20:32:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:08.788 20:32:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:08.788 20:32:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:08.788 20:32:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:08.788 20:32:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:08.788 20:32:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:08.788 20:32:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:08.788 20:32:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:08.788 20:32:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:08.788 20:32:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a8a8578d-3d29-44a6-b3d2-648721cccadb 00:06:08.788 20:32:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=a8a8578d-3d29-44a6-b3d2-648721cccadb 00:06:08.788 20:32:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:08.788 20:32:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:08.788 20:32:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:08.788 20:32:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:08.788 20:32:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:08.788 20:32:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:08.788 20:32:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:08.788 20:32:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:06:08.788 20:32:52 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:06:08.788 20:32:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:06:08.788 20:32:52 -- paths/export.sh@5 -- # export PATH 00:06:08.789 20:32:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:06:08.789 20:32:52 -- nvmf/common.sh@46 -- # : 0 00:06:08.789 20:32:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:08.789 20:32:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:08.789 20:32:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:08.789 20:32:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:08.789 20:32:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:08.789 20:32:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:08.789 20:32:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:08.789 20:32:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:08.789 20:32:52 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:06:08.789 20:32:52 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:06:08.789 20:32:52 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:06:08.789 20:32:52 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:08.789 20:32:52 -- json_config/json_config.sh@30 -- # app_pid=([target]="" [initiator]="") 00:06:08.789 20:32:52 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:06:08.789 20:32:52 -- json_config/json_config.sh@31 -- # app_socket=([target]='/var/tmp/spdk_tgt.sock' [initiator]='/var/tmp/spdk_initiator.sock') 00:06:08.789 20:32:52 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:06:08.789 20:32:52 -- json_config/json_config.sh@32 -- # app_params=([target]='-m 0x1 -s 1024' [initiator]='-m 0x2 -g -u -s 1024') 00:06:08.789 20:32:52 -- json_config/json_config.sh@32 -- # declare -A app_params 00:06:08.789 20:32:52 -- json_config/json_config.sh@33 -- # configs_path=([target]="$rootdir/spdk_tgt_config.json" [initiator]="$rootdir/spdk_initiator_config.json") 00:06:08.789 20:32:52 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:06:08.789 20:32:52 -- json_config/json_config.sh@43 -- # last_event_id=0 00:06:08.789 INFO: JSON configuration test init 00:06:08.789 20:32:52 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:08.789 20:32:52 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:06:08.789 20:32:52 -- json_config/json_config.sh@420 -- # json_config_test_init 00:06:08.789 20:32:52 -- 
json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:06:08.789 20:32:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:08.789 20:32:52 -- common/autotest_common.sh@10 -- # set +x 00:06:08.789 20:32:52 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:06:08.789 20:32:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:08.789 20:32:52 -- common/autotest_common.sh@10 -- # set +x 00:06:08.789 20:32:52 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:06:08.789 20:32:52 -- json_config/json_config.sh@98 -- # local app=target 00:06:08.789 20:32:52 -- json_config/json_config.sh@99 -- # shift 00:06:08.789 20:32:52 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:06:08.789 20:32:52 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:06:08.789 20:32:52 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:06:08.789 20:32:52 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:08.789 20:32:52 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:08.789 Waiting for target to run... 00:06:08.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:08.789 20:32:52 -- json_config/json_config.sh@111 -- # app_pid[$app]=39543 00:06:08.789 20:32:52 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:06:08.789 20:32:52 -- json_config/json_config.sh@114 -- # waitforlisten 39543 /var/tmp/spdk_tgt.sock 00:06:08.789 20:32:52 -- common/autotest_common.sh@819 -- # '[' -z 39543 ']' 00:06:08.789 20:32:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:08.789 20:32:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:08.789 20:32:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:08.789 20:32:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:08.789 20:32:52 -- common/autotest_common.sh@10 -- # set +x 00:06:08.789 20:32:52 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:08.789 [2024-04-15 20:32:52.265726] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
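waitforlisten's poll loop runs under xtrace_disable above, so only its banner shows in the trace. A plausible sketch of what it does, assuming rpc_get_methods as the liveness probe (the pid check, the socket path, and the retry cap of 100 all come from the trace):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid"                             # abort if the target died
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" \
                    rpc_get_methods &> /dev/null; then
                return 0                               # socket is up and answering
            fi
            sleep 0.5
        done
        return 1
    }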
00:06:08.789 [2024-04-15 20:32:52.265881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid39543 ] 00:06:09.356 [2024-04-15 20:32:52.638307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.356 [2024-04-15 20:32:52.792216] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:09.356 [2024-04-15 20:32:52.792413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.615 00:06:09.615 20:32:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:09.615 20:32:52 -- common/autotest_common.sh@852 -- # return 0 00:06:09.615 20:32:52 -- json_config/json_config.sh@115 -- # echo '' 00:06:09.615 20:32:52 -- json_config/json_config.sh@322 -- # create_accel_config 00:06:09.615 20:32:52 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:06:09.615 20:32:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:09.615 20:32:52 -- common/autotest_common.sh@10 -- # set +x 00:06:09.615 20:32:52 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:06:09.615 20:32:52 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:06:09.615 20:32:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:09.615 20:32:52 -- common/autotest_common.sh@10 -- # set +x 00:06:09.615 20:32:53 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:09.616 20:32:53 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:06:09.616 20:32:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:10.553 20:32:53 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:06:10.553 20:32:53 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:06:10.553 20:32:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:10.553 20:32:53 -- common/autotest_common.sh@10 -- # set +x 00:06:10.553 20:32:53 -- json_config/json_config.sh@48 -- # local ret=0 00:06:10.553 20:32:53 -- json_config/json_config.sh@49 -- # enabled_types=("bdev_register" "bdev_unregister") 00:06:10.553 20:32:53 -- json_config/json_config.sh@49 -- # local enabled_types 00:06:10.553 20:32:53 -- json_config/json_config.sh@51 -- # get_types=($(tgt_rpc notify_get_types | jq -r '.[]')) 00:06:10.553 20:32:53 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:10.553 20:32:53 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:10.553 20:32:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:10.553 20:32:53 -- json_config/json_config.sh@51 -- # local get_types 00:06:10.553 20:32:53 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:10.553 20:32:53 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:06:10.553 20:32:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:10.553 20:32:53 -- common/autotest_common.sh@10 -- # set +x 00:06:10.553 20:32:53 -- json_config/json_config.sh@58 -- # return 0 00:06:10.553 20:32:53 -- json_config/json_config.sh@331 -- # [[ 1 -eq 1 ]] 00:06:10.553 20:32:53 -- json_config/json_config.sh@332 -- # 
create_bdev_subsystem_config 00:06:10.553 20:32:53 -- json_config/json_config.sh@158 -- # timing_enter create_bdev_subsystem_config 00:06:10.553 20:32:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:10.553 20:32:53 -- common/autotest_common.sh@10 -- # set +x 00:06:10.553 20:32:54 -- json_config/json_config.sh@160 -- # expected_notifications=() 00:06:10.553 20:32:54 -- json_config/json_config.sh@160 -- # local expected_notifications 00:06:10.553 20:32:54 -- json_config/json_config.sh@164 -- # expected_notifications+=($(get_notifications)) 00:06:10.553 20:32:54 -- json_config/json_config.sh@164 -- # get_notifications 00:06:10.553 20:32:54 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:06:10.553 20:32:54 -- json_config/json_config.sh@64 -- # IFS=: 00:06:10.553 20:32:54 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:06:10.553 20:32:54 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:06:10.553 20:32:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:06:10.553 20:32:54 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:06:10.835 20:32:54 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:06:10.835 20:32:54 -- json_config/json_config.sh@64 -- # IFS=: 00:06:10.835 20:32:54 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:06:10.835 20:32:54 -- json_config/json_config.sh@166 -- # [[ 1 -eq 1 ]] 00:06:10.835 20:32:54 -- json_config/json_config.sh@167 -- # local lvol_store_base_bdev=Nvme0n1 00:06:10.835 20:32:54 -- json_config/json_config.sh@169 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:06:10.835 20:32:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:06:11.095 Nvme0n1p0 Nvme0n1p1 00:06:11.095 20:32:54 -- json_config/json_config.sh@170 -- # tgt_rpc bdev_split_create Malloc0 3 00:06:11.095 20:32:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:06:11.095 [2024-04-15 20:32:54.504047] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:06:11.095 [2024-04-15 20:32:54.504141] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:06:11.095 00:06:11.095 20:32:54 -- json_config/json_config.sh@171 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:06:11.095 20:32:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:06:11.354 Malloc3 00:06:11.354 20:32:54 -- json_config/json_config.sh@172 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:06:11.354 20:32:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:06:11.354 [2024-04-15 20:32:54.830164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:11.354 [2024-04-15 20:32:54.830257] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:11.354 [2024-04-15 20:32:54.830296] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000036080 00:06:11.354 [2024-04-15 20:32:54.830325] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:06:11.354 [2024-04-15 20:32:54.831894] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:11.354 [2024-04-15 20:32:54.831948] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:06:11.354 PTBdevFromMalloc3 00:06:11.354 20:32:54 -- json_config/json_config.sh@174 -- # tgt_rpc bdev_null_create Null0 32 512 00:06:11.354 20:32:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:06:11.613 Null0 00:06:11.614 20:32:55 -- json_config/json_config.sh@176 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:06:11.614 20:32:55 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:06:11.872 Malloc0 00:06:11.872 20:32:55 -- json_config/json_config.sh@177 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:06:11.872 20:32:55 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:06:11.872 Malloc1 00:06:11.873 20:32:55 -- json_config/json_config.sh@190 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:06:11.873 20:32:55 -- json_config/json_config.sh@193 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:06:12.441 102400+0 records in 00:06:12.441 102400+0 records out 00:06:12.441 104857600 bytes (105 MB) copied, 0.320398 s, 327 MB/s 00:06:12.441 20:32:55 -- json_config/json_config.sh@194 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:06:12.441 20:32:55 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:06:12.441 aio_disk 00:06:12.441 20:32:55 -- json_config/json_config.sh@195 -- # expected_notifications+=(bdev_register:aio_disk) 00:06:12.441 20:32:55 -- json_config/json_config.sh@200 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:06:12.441 20:32:55 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:06:12.700 8576ab1d-83b4-4fe9-8f51-0abcf8fc1f6e 00:06:12.700 20:32:55 -- json_config/json_config.sh@207 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:06:12.700 20:32:56 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:06:12.700 20:32:56 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:06:12.700 20:32:56 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:06:12.700 20:32:56 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:06:12.959 20:32:56 -- json_config/json_config.sh@207 -- # tgt_rpc 
bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:06:12.959 20:32:56 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:06:13.217 20:32:56 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:06:13.217 20:32:56 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:06:13.217 20:32:56 -- json_config/json_config.sh@210 -- # [[ 0 -eq 1 ]] 00:06:13.217 20:32:56 -- json_config/json_config.sh@225 -- # [[ 0 -eq 1 ]] 00:06:13.217 20:32:56 -- json_config/json_config.sh@231 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:226663dc-aad2-45da-9e65-143d0b9bd190 bdev_register:4865adfc-db60-4e05-9c1b-5bc1061f82df bdev_register:756e6a20-62fc-4599-88c5-4e1ab26432e0 bdev_register:2e7a7f90-da8a-435e-8412-f8edb0674e4c 00:06:13.217 20:32:56 -- json_config/json_config.sh@70 -- # local events_to_check 00:06:13.217 20:32:56 -- json_config/json_config.sh@71 -- # local recorded_events 00:06:13.217 20:32:56 -- json_config/json_config.sh@74 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:06:13.217 20:32:56 -- json_config/json_config.sh@74 -- # sort 00:06:13.217 20:32:56 -- json_config/json_config.sh@74 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:226663dc-aad2-45da-9e65-143d0b9bd190 bdev_register:4865adfc-db60-4e05-9c1b-5bc1061f82df bdev_register:756e6a20-62fc-4599-88c5-4e1ab26432e0 bdev_register:2e7a7f90-da8a-435e-8412-f8edb0674e4c 00:06:13.217 20:32:56 -- json_config/json_config.sh@75 -- # recorded_events=($(get_notifications | sort)) 00:06:13.217 20:32:56 -- json_config/json_config.sh@75 -- # sort 00:06:13.217 20:32:56 -- json_config/json_config.sh@75 -- # get_notifications 00:06:13.217 20:32:56 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:06:13.217 20:32:56 -- json_config/json_config.sh@64 -- # IFS=: 00:06:13.217 20:32:56 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:06:13.217 20:32:56 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:06:13.217 20:32:56 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:06:13.217 20:32:56 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:06:13.477 20:32:56 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:06:13.477 20:32:56 -- json_config/json_config.sh@64 -- # IFS=: 00:06:13.477 20:32:56 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:06:13.477 20:32:56 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p1 00:06:13.477 20:32:56 -- json_config/json_config.sh@64 -- # IFS=: 00:06:13.477 20:32:56 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:06:13.477 20:32:56 -- json_config/json_config.sh@65 -- # echo 
bdev_register:Nvme0n1p0 00:06:13.477 20:32:56 -- json_config/json_config.sh@64 -- # IFS=: 00:06:13.477 20:32:56 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:06:13.477 20:32:56 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc3 00:06:13.477 20:32:56 -- json_config/json_config.sh@64 -- # IFS=: 00:06:13.477 20:32:56 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:06:13.477 20:32:56 -- json_config/json_config.sh@65 -- # echo bdev_register:PTBdevFromMalloc3 00:06:13.477 20:32:56 -- json_config/json_config.sh@64 -- # IFS=: 00:06:13.477 20:32:56 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:06:13.477 20:32:56 -- json_config/json_config.sh@65 -- # echo bdev_register:Null0 00:06:13.477 20:32:56 -- json_config/json_config.sh@64 -- # IFS=: 00:06:13.477 20:32:56 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:06:13.477 20:32:56 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0 00:06:13.477 20:32:56 -- json_config/json_config.sh@64 -- # IFS=: 00:06:13.477 20:32:56 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:06:13.477 20:32:56 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p2 00:06:13.477 20:32:56 -- json_config/json_config.sh@64 -- # IFS=: 00:06:13.477 20:32:56 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:06:13.477 20:32:56 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p1 00:06:13.477 20:32:56 -- json_config/json_config.sh@64 -- # IFS=: 00:06:13.477 20:32:56 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:06:13.477 20:32:56 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p0 00:06:13.477 20:32:56 -- json_config/json_config.sh@64 -- # IFS=: 00:06:13.477 20:32:56 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:06:13.477 20:32:56 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc1 00:06:13.477 20:32:56 -- json_config/json_config.sh@64 -- # IFS=: 00:06:13.477 20:32:56 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:06:13.477 20:32:56 -- json_config/json_config.sh@65 -- # echo bdev_register:aio_disk 00:06:13.477 20:32:56 -- json_config/json_config.sh@64 -- # IFS=: 00:06:13.477 20:32:56 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:06:13.477 20:32:56 -- json_config/json_config.sh@65 -- # echo bdev_register:226663dc-aad2-45da-9e65-143d0b9bd190 00:06:13.477 20:32:56 -- json_config/json_config.sh@64 -- # IFS=: 00:06:13.477 20:32:56 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:06:13.477 20:32:56 -- json_config/json_config.sh@65 -- # echo bdev_register:4865adfc-db60-4e05-9c1b-5bc1061f82df 00:06:13.477 20:32:56 -- json_config/json_config.sh@64 -- # IFS=: 00:06:13.477 20:32:56 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:06:13.477 20:32:56 -- json_config/json_config.sh@65 -- # echo bdev_register:756e6a20-62fc-4599-88c5-4e1ab26432e0 00:06:13.477 20:32:56 -- json_config/json_config.sh@64 -- # IFS=: 00:06:13.477 20:32:56 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:06:13.477 20:32:56 -- json_config/json_config.sh@65 -- # echo bdev_register:2e7a7f90-da8a-435e-8412-f8edb0674e4c 00:06:13.477 20:32:56 -- json_config/json_config.sh@64 -- # IFS=: 00:06:13.477 20:32:56 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:06:13.477 20:32:56 -- json_config/json_config.sh@77 
-- # [[ bdev_register:226663dc-aad2-45da-9e65-143d0b9bd190 bdev_register:2e7a7f90-da8a-435e-8412-f8edb0674e4c bdev_register:4865adfc-db60-4e05-9c1b-5bc1061f82df bdev_register:756e6a20-62fc-4599-88c5-4e1ab26432e0 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\2\2\6\6\6\3\d\c\-\a\a\d\2\-\4\5\d\a\-\9\e\6\5\-\1\4\3\d\0\b\9\b\d\1\9\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\2\e\7\a\7\f\9\0\-\d\a\8\a\-\4\3\5\e\-\8\4\1\2\-\f\8\e\d\b\0\6\7\4\e\4\c\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\4\8\6\5\a\d\f\c\-\d\b\6\0\-\4\e\0\5\-\9\c\1\b\-\5\b\c\1\0\6\1\f\8\2\d\f\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\7\5\6\e\6\a\2\0\-\6\2\f\c\-\4\5\9\9\-\8\8\c\5\-\4\e\1\a\b\2\6\4\3\2\e\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k ]] 00:06:13.477 20:32:56 -- json_config/json_config.sh@89 -- # cat 00:06:13.477 20:32:56 -- json_config/json_config.sh@89 -- # printf ' %s\n' bdev_register:226663dc-aad2-45da-9e65-143d0b9bd190 bdev_register:2e7a7f90-da8a-435e-8412-f8edb0674e4c bdev_register:4865adfc-db60-4e05-9c1b-5bc1061f82df bdev_register:756e6a20-62fc-4599-88c5-4e1ab26432e0 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk 00:06:13.477 Expected events matched: 00:06:13.477 bdev_register:226663dc-aad2-45da-9e65-143d0b9bd190 00:06:13.477 bdev_register:2e7a7f90-da8a-435e-8412-f8edb0674e4c 00:06:13.477 bdev_register:4865adfc-db60-4e05-9c1b-5bc1061f82df 00:06:13.477 bdev_register:756e6a20-62fc-4599-88c5-4e1ab26432e0 00:06:13.477 bdev_register:Malloc0 00:06:13.477 bdev_register:Malloc0p0 00:06:13.477 bdev_register:Malloc0p1 00:06:13.477 bdev_register:Malloc0p2 00:06:13.477 bdev_register:Malloc1 00:06:13.477 bdev_register:Malloc3 00:06:13.477 bdev_register:Null0 00:06:13.477 bdev_register:Nvme0n1 00:06:13.477 bdev_register:Nvme0n1p0 00:06:13.477 bdev_register:Nvme0n1p1 00:06:13.477 bdev_register:PTBdevFromMalloc3 00:06:13.477 bdev_register:aio_disk 00:06:13.477 20:32:56 -- json_config/json_config.sh@233 -- # timing_exit create_bdev_subsystem_config 00:06:13.477 20:32:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:13.477 20:32:56 -- common/autotest_common.sh@10 -- # set +x 00:06:13.477 20:32:56 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:06:13.477 20:32:56 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:06:13.477 20:32:56 -- json_config/json_config.sh@343 -- # [[ 0 -eq 1 ]] 00:06:13.477 20:32:56 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:06:13.477 20:32:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:13.477 20:32:56 -- common/autotest_common.sh@10 -- # set +x 00:06:13.477 
20:32:56 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:06:13.477 20:32:56 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:13.477 20:32:56 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:13.736 MallocBdevForConfigChangeCheck 00:06:13.736 20:32:57 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:06:13.736 20:32:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:13.736 20:32:57 -- common/autotest_common.sh@10 -- # set +x 00:06:13.736 20:32:57 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:06:13.736 20:32:57 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:13.995 INFO: shutting down applications... 00:06:13.995 20:32:57 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:06:13.995 20:32:57 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:06:13.995 20:32:57 -- json_config/json_config.sh@431 -- # json_config_clear target 00:06:13.995 20:32:57 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:06:13.995 20:32:57 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:14.289 [2024-04-15 20:32:57.614632] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:06:14.548 Calling clear_vhost_scsi_subsystem 00:06:14.548 Calling clear_iscsi_subsystem 00:06:14.548 Calling clear_vhost_blk_subsystem 00:06:14.548 Calling clear_nbd_subsystem 00:06:14.548 Calling clear_nvmf_subsystem 00:06:14.548 Calling clear_bdev_subsystem 00:06:14.548 Calling clear_accel_subsystem 00:06:14.548 Calling clear_iobuf_subsystem 00:06:14.548 Calling clear_sock_subsystem 00:06:14.548 Calling clear_vmd_subsystem 00:06:14.548 Calling clear_scheduler_subsystem 00:06:14.548 20:32:57 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:14.548 20:32:57 -- json_config/json_config.sh@396 -- # count=100 00:06:14.548 20:32:57 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:06:14.548 20:32:57 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:14.548 20:32:57 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:14.548 20:32:57 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:14.806 20:32:58 -- json_config/json_config.sh@398 -- # break 00:06:14.806 20:32:58 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:06:14.806 20:32:58 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:06:14.806 20:32:58 -- json_config/json_config.sh@120 -- # local app=target 00:06:14.806 20:32:58 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:06:14.806 20:32:58 -- json_config/json_config.sh@124 -- # [[ -n 39543 ]] 00:06:14.806 20:32:58 -- json_config/json_config.sh@127 -- # kill -SIGINT 39543 00:06:14.806 20:32:58 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:06:14.806 20:32:58 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:06:14.806 20:32:58 -- 
json_config/json_config.sh@130 -- # kill -0 39543 00:06:14.806 20:32:58 -- json_config/json_config.sh@134 -- # sleep 0.5 00:06:15.375 20:32:58 -- json_config/json_config.sh@129 -- # (( i++ )) 00:06:15.375 20:32:58 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:06:15.375 20:32:58 -- json_config/json_config.sh@130 -- # kill -0 39543 00:06:15.375 20:32:58 -- json_config/json_config.sh@134 -- # sleep 0.5 00:06:15.634 20:32:59 -- json_config/json_config.sh@129 -- # (( i++ )) 00:06:15.634 20:32:59 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:06:15.635 20:32:59 -- json_config/json_config.sh@130 -- # kill -0 39543 00:06:15.635 20:32:59 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:06:15.635 20:32:59 -- json_config/json_config.sh@132 -- # break 00:06:15.635 SPDK target shutdown done 00:06:15.635 20:32:59 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:06:15.635 20:32:59 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:06:15.635 INFO: relaunching applications... 00:06:15.635 Waiting for target to run... 00:06:15.635 20:32:59 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:06:15.635 20:32:59 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:15.635 20:32:59 -- json_config/json_config.sh@98 -- # local app=target 00:06:15.635 20:32:59 -- json_config/json_config.sh@99 -- # shift 00:06:15.635 20:32:59 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:06:15.635 20:32:59 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:06:15.635 20:32:59 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:06:15.635 20:32:59 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:15.635 20:32:59 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:15.635 20:32:59 -- json_config/json_config.sh@111 -- # app_pid[$app]=39794 00:06:15.635 20:32:59 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:06:15.635 20:32:59 -- json_config/json_config.sh@114 -- # waitforlisten 39794 /var/tmp/spdk_tgt.sock 00:06:15.635 20:32:59 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:15.635 20:32:59 -- common/autotest_common.sh@819 -- # '[' -z 39794 ']' 00:06:15.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:15.635 20:32:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:15.635 20:32:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:15.635 20:32:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:15.635 20:32:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:15.635 20:32:59 -- common/autotest_common.sh@10 -- # set +x 00:06:15.894 [2024-04-15 20:32:59.246711] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
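Stripped of its retry scaffolding, the save-and-relaunch step traced here reduces to a few commands (paths and flags as in the log; the backgrounded target still needs a waitforlisten before it can be used):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    $rpc -s /var/tmp/spdk_tgt.sock save_config \
        > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json    # persist live state
    # ...SIGINT the old target and wait for it to exit, then relaunch from the file:
    $tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json &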
00:06:15.894 [2024-04-15 20:32:59.246869] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid39794 ] 00:06:16.153 [2024-04-15 20:32:59.619571] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.412 [2024-04-15 20:32:59.768912] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:16.412 [2024-04-15 20:32:59.769092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.981 [2024-04-15 20:33:00.320749] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:06:16.981 [2024-04-15 20:33:00.320843] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:06:16.981 [2024-04-15 20:33:00.328712] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:06:16.981 [2024-04-15 20:33:00.328756] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:06:16.981 [2024-04-15 20:33:00.336731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:16.981 [2024-04-15 20:33:00.336774] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:06:16.981 [2024-04-15 20:33:00.336797] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:06:16.981 [2024-04-15 20:33:00.421178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:16.981 [2024-04-15 20:33:00.421243] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:16.981 [2024-04-15 20:33:00.421277] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000038780 00:06:16.981 [2024-04-15 20:33:00.421298] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:16.981 [2024-04-15 20:33:00.421571] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:16.981 [2024-04-15 20:33:00.421598] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:06:17.240 00:06:17.240 INFO: Checking if target configuration is the same... 00:06:17.240 20:33:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:17.240 20:33:00 -- common/autotest_common.sh@852 -- # return 0 00:06:17.240 20:33:00 -- json_config/json_config.sh@115 -- # echo '' 00:06:17.240 20:33:00 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:06:17.240 20:33:00 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:17.240 20:33:00 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:17.240 20:33:00 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:06:17.240 20:33:00 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:17.240 + '[' 2 -ne 2 ']' 00:06:17.240 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:17.240 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
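The '+' lines around this point are json_diff.sh at work. Its core is small: normalize the live and the saved configuration with the same sort filter, then compare byte for byte (this sketch assumes config_filter.py reads stdin and writes stdout, matching how it is invoked in the trace; the tmp names are the ones mktemp hands back below):

    filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > /tmp/62.FcI
    $filter -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json \
        > /tmp/spdk_tgt_config.json.yPE
    diff -u /tmp/62.FcI /tmp/spdk_tgt_config.json.yPE \
        && echo 'INFO: JSON config files are the same'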
00:06:17.240 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:17.240 +++ basename /dev/fd/62 00:06:17.240 ++ mktemp /tmp/62.XXX 00:06:17.240 + tmp_file_1=/tmp/62.FcI 00:06:17.240 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:17.240 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:17.240 + tmp_file_2=/tmp/spdk_tgt_config.json.yPE 00:06:17.240 + ret=0 00:06:17.240 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:17.499 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:17.499 + diff -u /tmp/62.FcI /tmp/spdk_tgt_config.json.yPE 00:06:17.499 INFO: JSON config files are the same 00:06:17.499 + echo 'INFO: JSON config files are the same' 00:06:17.499 + rm /tmp/62.FcI /tmp/spdk_tgt_config.json.yPE 00:06:17.499 + exit 0 00:06:17.499 INFO: changing configuration and checking if this can be detected... 00:06:17.499 20:33:00 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:06:17.499 20:33:00 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:17.499 20:33:00 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:17.499 20:33:00 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:17.758 20:33:01 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:17.758 20:33:01 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:06:17.758 20:33:01 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:17.758 + '[' 2 -ne 2 ']' 00:06:17.758 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:17.758 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:17.758 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:17.758 +++ basename /dev/fd/62 00:06:17.758 ++ mktemp /tmp/62.XXX 00:06:17.758 + tmp_file_1=/tmp/62.VjF 00:06:17.758 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:17.758 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:17.758 + tmp_file_2=/tmp/spdk_tgt_config.json.0wY 00:06:17.758 + ret=0 00:06:17.758 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:18.017 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:18.017 + diff -u /tmp/62.VjF /tmp/spdk_tgt_config.json.0wY 00:06:18.017 + ret=1 00:06:18.017 + echo '=== Start of file: /tmp/62.VjF ===' 00:06:18.017 + cat /tmp/62.VjF 00:06:18.017 + echo '=== End of file: /tmp/62.VjF ===' 00:06:18.017 + echo '' 00:06:18.017 + echo '=== Start of file: /tmp/spdk_tgt_config.json.0wY ===' 00:06:18.017 + cat /tmp/spdk_tgt_config.json.0wY 00:06:18.017 + echo '=== End of file: /tmp/spdk_tgt_config.json.0wY ===' 00:06:18.017 + echo '' 00:06:18.017 + rm /tmp/62.VjF /tmp/spdk_tgt_config.json.0wY 00:06:18.017 + exit 1 00:06:18.017 INFO: configuration change detected. 00:06:18.017 20:33:01 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
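In outline, the change-detection probe that just completed: plant a marker bdev, persist the config, remove the marker, and require the re-diff to fail (the bdev size and name are from the trace; the before/after tmp paths here are illustrative only):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk_tgt.sock
    $rpc -s $sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
    $rpc -s $sock save_config > /tmp/before.json     # state with the marker bdev
    $rpc -s $sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    $rpc -s $sock save_config > /tmp/after.json      # state without it
    ! diff -u /tmp/before.json /tmp/after.json \
        && echo 'INFO: configuration change detected.'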
00:06:18.017 20:33:01 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:06:18.017 20:33:01 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:06:18.017 20:33:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:18.017 20:33:01 -- common/autotest_common.sh@10 -- # set +x 00:06:18.017 20:33:01 -- json_config/json_config.sh@360 -- # local ret=0 00:06:18.017 20:33:01 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:06:18.017 20:33:01 -- json_config/json_config.sh@370 -- # [[ -n 39794 ]] 00:06:18.017 20:33:01 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:06:18.017 20:33:01 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:06:18.017 20:33:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:18.017 20:33:01 -- common/autotest_common.sh@10 -- # set +x 00:06:18.017 20:33:01 -- json_config/json_config.sh@239 -- # [[ 1 -eq 1 ]] 00:06:18.017 20:33:01 -- json_config/json_config.sh@240 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:06:18.017 20:33:01 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:06:18.276 20:33:01 -- json_config/json_config.sh@241 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:06:18.276 20:33:01 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:06:18.535 20:33:01 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:06:18.535 20:33:01 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:06:18.535 20:33:01 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:06:18.535 20:33:01 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:06:18.794 20:33:02 -- json_config/json_config.sh@246 -- # uname -s 00:06:18.794 20:33:02 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:06:18.794 20:33:02 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:06:18.794 20:33:02 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:06:18.794 20:33:02 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:06:18.794 20:33:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:18.794 20:33:02 -- common/autotest_common.sh@10 -- # set +x 00:06:18.794 20:33:02 -- json_config/json_config.sh@376 -- # killprocess 39794 00:06:18.794 20:33:02 -- common/autotest_common.sh@926 -- # '[' -z 39794 ']' 00:06:18.794 20:33:02 -- common/autotest_common.sh@930 -- # kill -0 39794 00:06:18.794 20:33:02 -- common/autotest_common.sh@931 -- # uname 00:06:18.794 20:33:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:18.794 20:33:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 39794 00:06:18.794 killing process with pid 39794 00:06:18.794 20:33:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:18.794 20:33:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:18.794 20:33:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 39794' 00:06:18.794 20:33:02 -- common/autotest_common.sh@945 -- # kill 39794 00:06:18.794 20:33:02 -- common/autotest_common.sh@950 -- # wait 39794 00:06:19.732 20:33:03 -- json_config/json_config.sh@379 -- 
# rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:19.732 20:33:03 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:06:19.732 20:33:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:19.732 20:33:03 -- common/autotest_common.sh@10 -- # set +x 00:06:19.992 INFO: Success 00:06:19.992 20:33:03 -- json_config/json_config.sh@381 -- # return 0 00:06:19.992 20:33:03 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:06:19.992 ************************************ 00:06:19.992 END TEST json_config 00:06:19.992 ************************************ 00:06:19.992 00:06:19.992 real 0m11.222s 00:06:19.992 user 0m15.132s 00:06:19.992 sys 0m2.203s 00:06:19.992 20:33:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.992 20:33:03 -- common/autotest_common.sh@10 -- # set +x 00:06:19.992 20:33:03 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:19.992 20:33:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:19.992 20:33:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:19.992 20:33:03 -- common/autotest_common.sh@10 -- # set +x 00:06:19.992 ************************************ 00:06:19.992 START TEST json_config_extra_key 00:06:19.992 ************************************ 00:06:19.992 20:33:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:19.992 20:33:03 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:19.992 20:33:03 -- nvmf/common.sh@7 -- # uname -s 00:06:19.992 20:33:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:19.992 20:33:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:19.992 20:33:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:19.992 20:33:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:19.992 20:33:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:19.992 20:33:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:19.992 20:33:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:19.992 20:33:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:19.992 20:33:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:19.992 20:33:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:19.992 20:33:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d6728f01-8c29-47d9-a209-e11d9b042d3b 00:06:19.992 20:33:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=d6728f01-8c29-47d9-a209-e11d9b042d3b 00:06:19.992 20:33:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:19.992 20:33:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:19.992 20:33:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:19.992 20:33:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:19.992 20:33:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:19.992 20:33:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:19.992 20:33:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:19.992 20:33:03 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:06:19.992 20:33:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:06:19.993 20:33:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:06:19.993 20:33:03 -- paths/export.sh@5 -- # export PATH 00:06:19.993 20:33:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:06:19.993 20:33:03 -- nvmf/common.sh@46 -- # : 0 00:06:19.993 20:33:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:19.993 20:33:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:19.993 20:33:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:19.993 20:33:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:19.993 20:33:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:19.993 20:33:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:19.993 20:33:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:19.993 20:33:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:19.993 20:33:03 -- json_config/json_config_extra_key.sh@16 -- # app_pid=([target]="") 00:06:19.993 20:33:03 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:06:19.993 20:33:03 -- json_config/json_config_extra_key.sh@17 -- # app_socket=([target]='/var/tmp/spdk_tgt.sock') 00:06:19.993 INFO: launching applications... 00:06:19.993 20:33:03 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:06:19.993 20:33:03 -- json_config/json_config_extra_key.sh@18 -- # app_params=([target]='-m 0x1 -s 1024') 00:06:19.993 20:33:03 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:06:19.993 20:33:03 -- json_config/json_config_extra_key.sh@19 -- # configs_path=([target]="$rootdir/test/json_config/extra_key.json") 00:06:19.993 20:33:03 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:06:19.993 20:33:03 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:19.993 20:33:03 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 
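The launch that follows, gathered into one piece (flags and paths exactly as traced; waitforlisten is the helper sketched earlier):

    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    $tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    app_pid=$!
    waitforlisten "$app_pid" /var/tmp/spdk_tgt.sock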
00:06:19.993 20:33:03 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:19.993 20:33:03 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:06:19.993 20:33:03 -- json_config/json_config_extra_key.sh@25 -- # shift 00:06:19.993 Waiting for target to run... 00:06:19.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:19.993 20:33:03 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:06:19.993 20:33:03 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:06:19.993 20:33:03 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=39979 00:06:19.993 20:33:03 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:06:19.993 20:33:03 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 39979 /var/tmp/spdk_tgt.sock 00:06:19.993 20:33:03 -- common/autotest_common.sh@819 -- # '[' -z 39979 ']' 00:06:19.993 20:33:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:19.993 20:33:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:19.993 20:33:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:19.993 20:33:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:19.993 20:33:03 -- common/autotest_common.sh@10 -- # set +x 00:06:19.993 20:33:03 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:20.253 [2024-04-15 20:33:03.574258] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:20.253 [2024-04-15 20:33:03.574418] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid39979 ] 00:06:20.512 [2024-04-15 20:33:03.974416] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.772 [2024-04-15 20:33:04.128639] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:20.772 [2024-04-15 20:33:04.128831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.764 00:06:21.764 INFO: shutting down applications... 00:06:21.764 20:33:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:21.764 20:33:04 -- common/autotest_common.sh@852 -- # return 0 00:06:21.764 20:33:04 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:06:21.764 20:33:04 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
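The graceful-shutdown wait that plays out below, condensed: SIGINT first, then poll kill -0 for up to 30 half-second intervals, exactly the bounds in the trace:

    kill -SIGINT "$app_pid"                          # ask the target to exit cleanly
    for ((i = 0; i < 30; i++)); do
        kill -0 "$app_pid" 2> /dev/null || break     # process gone: stop polling
        sleep 0.5
    done
    echo 'SPDK target shutdown done'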
00:06:21.764 20:33:04 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:06:21.764 20:33:04 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:06:21.764 20:33:04 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:06:21.764 20:33:04 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 39979 ]] 00:06:21.764 20:33:04 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 39979 00:06:21.764 20:33:04 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:06:21.764 20:33:04 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:21.764 20:33:04 -- json_config/json_config_extra_key.sh@50 -- # kill -0 39979 00:06:21.764 20:33:04 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:06:22.023 20:33:05 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:06:22.023 20:33:05 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:22.023 20:33:05 -- json_config/json_config_extra_key.sh@50 -- # kill -0 39979 00:06:22.023 20:33:05 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:06:22.591 20:33:05 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:06:22.591 20:33:05 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:22.591 20:33:05 -- json_config/json_config_extra_key.sh@50 -- # kill -0 39979 00:06:22.592 20:33:05 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:06:23.165 20:33:06 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:06:23.165 20:33:06 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:23.165 20:33:06 -- json_config/json_config_extra_key.sh@50 -- # kill -0 39979 00:06:23.165 20:33:06 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:06:23.431 20:33:06 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:06:23.431 20:33:06 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:23.431 20:33:06 -- json_config/json_config_extra_key.sh@50 -- # kill -0 39979 00:06:23.431 20:33:06 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:06:24.017 20:33:07 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:06:24.017 20:33:07 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:24.017 20:33:07 -- json_config/json_config_extra_key.sh@50 -- # kill -0 39979 00:06:24.017 SPDK target shutdown done 00:06:24.017 Success 00:06:24.017 20:33:07 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:06:24.017 20:33:07 -- json_config/json_config_extra_key.sh@52 -- # break 00:06:24.017 20:33:07 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:06:24.017 20:33:07 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:06:24.017 20:33:07 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:06:24.017 ************************************ 00:06:24.017 END TEST json_config_extra_key 00:06:24.017 ************************************ 00:06:24.017 00:06:24.017 real 0m4.110s 00:06:24.017 user 0m3.758s 00:06:24.017 sys 0m0.512s 00:06:24.017 20:33:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.017 20:33:07 -- common/autotest_common.sh@10 -- # set +x 00:06:24.017 20:33:07 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:24.017 20:33:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:24.017 20:33:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:24.017 20:33:07 -- common/autotest_common.sh@10 -- # 
set +x 00:06:24.017 ************************************ 00:06:24.017 START TEST alias_rpc 00:06:24.017 ************************************ 00:06:24.017 20:33:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:24.288 * Looking for test storage... 00:06:24.288 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:24.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.288 20:33:07 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:24.289 20:33:07 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=40099 00:06:24.289 20:33:07 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 40099 00:06:24.289 20:33:07 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:24.289 20:33:07 -- common/autotest_common.sh@819 -- # '[' -z 40099 ']' 00:06:24.289 20:33:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.289 20:33:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:24.289 20:33:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.289 20:33:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:24.289 20:33:07 -- common/autotest_common.sh@10 -- # set +x 00:06:24.289 [2024-04-15 20:33:07.759081] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:24.289 [2024-04-15 20:33:07.759233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid40099 ] 00:06:24.562 [2024-04-15 20:33:07.928813] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.825 [2024-04-15 20:33:08.098708] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:24.825 [2024-04-15 20:33:08.098908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.763 20:33:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:25.763 20:33:09 -- common/autotest_common.sh@852 -- # return 0 00:06:25.763 20:33:09 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:26.021 20:33:09 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 40099 00:06:26.021 20:33:09 -- common/autotest_common.sh@926 -- # '[' -z 40099 ']' 00:06:26.021 20:33:09 -- common/autotest_common.sh@930 -- # kill -0 40099 00:06:26.021 20:33:09 -- common/autotest_common.sh@931 -- # uname 00:06:26.021 20:33:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:26.021 20:33:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 40099 00:06:26.021 20:33:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:26.021 20:33:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:26.021 killing process with pid 40099 00:06:26.021 20:33:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 40099' 00:06:26.021 20:33:09 -- common/autotest_common.sh@945 -- # kill 40099 00:06:26.021 20:33:09 -- common/autotest_common.sh@950 -- # wait 40099 00:06:27.989 ************************************ 00:06:27.989 END TEST alias_rpc 00:06:27.989 ************************************ 00:06:27.989 00:06:27.989 real 0m3.945s 00:06:27.989 user 0m3.883s 
00:06:27.989 sys 0m0.536s 00:06:27.989 20:33:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.989 20:33:11 -- common/autotest_common.sh@10 -- # set +x 00:06:28.247 20:33:11 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:06:28.247 20:33:11 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:28.247 20:33:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:28.247 20:33:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:28.247 20:33:11 -- common/autotest_common.sh@10 -- # set +x 00:06:28.247 ************************************ 00:06:28.247 START TEST spdkcli_tcp 00:06:28.247 ************************************ 00:06:28.247 20:33:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:28.247 * Looking for test storage... 00:06:28.247 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:28.247 20:33:11 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:28.247 20:33:11 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:28.247 20:33:11 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:28.247 20:33:11 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:28.247 20:33:11 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:28.247 20:33:11 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:28.247 20:33:11 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:28.247 20:33:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:28.247 20:33:11 -- common/autotest_common.sh@10 -- # set +x 00:06:28.247 20:33:11 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=40220 00:06:28.247 20:33:11 -- spdkcli/tcp.sh@27 -- # waitforlisten 40220 00:06:28.247 20:33:11 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:28.247 20:33:11 -- common/autotest_common.sh@819 -- # '[' -z 40220 ']' 00:06:28.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.247 20:33:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.247 20:33:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:28.247 20:33:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.247 20:33:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:28.247 20:33:11 -- common/autotest_common.sh@10 -- # set +x 00:06:28.506 [2024-04-15 20:33:11.779712] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:06:28.506 [2024-04-15 20:33:11.779861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid40220 ] 00:06:28.506 [2024-04-15 20:33:11.934095] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.764 [2024-04-15 20:33:12.101847] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:28.764 [2024-04-15 20:33:12.102238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.764 [2024-04-15 20:33:12.102689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.700 20:33:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:29.700 20:33:13 -- common/autotest_common.sh@852 -- # return 0 00:06:29.700 20:33:13 -- spdkcli/tcp.sh@31 -- # socat_pid=40243 00:06:29.700 20:33:13 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:29.700 20:33:13 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:30.015 [ 00:06:30.015 "spdk_get_version", 00:06:30.015 "rpc_get_methods", 00:06:30.015 "trace_get_info", 00:06:30.015 "trace_get_tpoint_group_mask", 00:06:30.015 "trace_disable_tpoint_group", 00:06:30.015 "trace_enable_tpoint_group", 00:06:30.015 "trace_clear_tpoint_mask", 00:06:30.015 "trace_set_tpoint_mask", 00:06:30.015 "framework_get_pci_devices", 00:06:30.015 "framework_get_config", 00:06:30.015 "framework_get_subsystems", 00:06:30.015 "iobuf_get_stats", 00:06:30.015 "iobuf_set_options", 00:06:30.015 "sock_set_default_impl", 00:06:30.015 "sock_impl_set_options", 00:06:30.015 "sock_impl_get_options", 00:06:30.015 "vmd_rescan", 00:06:30.015 "vmd_remove_device", 00:06:30.015 "vmd_enable", 00:06:30.015 "accel_get_stats", 00:06:30.015 "accel_set_options", 00:06:30.015 "accel_set_driver", 00:06:30.015 "accel_crypto_key_destroy", 00:06:30.015 "accel_crypto_keys_get", 00:06:30.015 "accel_crypto_key_create", 00:06:30.015 "accel_assign_opc", 00:06:30.015 "accel_get_module_info", 00:06:30.015 "accel_get_opc_assignments", 00:06:30.015 "notify_get_notifications", 00:06:30.015 "notify_get_types", 00:06:30.015 "bdev_get_histogram", 00:06:30.015 "bdev_enable_histogram", 00:06:30.015 "bdev_set_qos_limit", 00:06:30.015 "bdev_set_qd_sampling_period", 00:06:30.015 "bdev_get_bdevs", 00:06:30.015 "bdev_reset_iostat", 00:06:30.015 "bdev_get_iostat", 00:06:30.015 "bdev_examine", 00:06:30.015 "bdev_wait_for_examine", 00:06:30.015 "bdev_set_options", 00:06:30.015 "scsi_get_devices", 00:06:30.015 "thread_set_cpumask", 00:06:30.015 "framework_get_scheduler", 00:06:30.015 "framework_set_scheduler", 00:06:30.015 "framework_get_reactors", 00:06:30.015 "thread_get_io_channels", 00:06:30.015 "thread_get_pollers", 00:06:30.015 "thread_get_stats", 00:06:30.015 "framework_monitor_context_switch", 00:06:30.015 "spdk_kill_instance", 00:06:30.015 "log_enable_timestamps", 00:06:30.015 "log_get_flags", 00:06:30.015 "log_clear_flag", 00:06:30.015 "log_set_flag", 00:06:30.015 "log_get_level", 00:06:30.015 "log_set_level", 00:06:30.015 "log_get_print_level", 00:06:30.015 "log_set_print_level", 00:06:30.015 "framework_enable_cpumask_locks", 00:06:30.015 "framework_disable_cpumask_locks", 00:06:30.015 "framework_wait_init", 00:06:30.015 "framework_start_init", 00:06:30.015 "virtio_blk_create_transport", 00:06:30.015 "virtio_blk_get_transports", 
00:06:30.015 "vhost_controller_set_coalescing", 00:06:30.015 "vhost_get_controllers", 00:06:30.015 "vhost_delete_controller", 00:06:30.015 "vhost_create_blk_controller", 00:06:30.015 "vhost_scsi_controller_remove_target", 00:06:30.015 "vhost_scsi_controller_add_target", 00:06:30.015 "vhost_start_scsi_controller", 00:06:30.015 "vhost_create_scsi_controller", 00:06:30.015 "nbd_get_disks", 00:06:30.015 "nbd_stop_disk", 00:06:30.015 "nbd_start_disk", 00:06:30.015 "env_dpdk_get_mem_stats", 00:06:30.015 "nvmf_subsystem_get_listeners", 00:06:30.015 "nvmf_subsystem_get_qpairs", 00:06:30.015 "nvmf_subsystem_get_controllers", 00:06:30.015 "nvmf_get_stats", 00:06:30.015 "nvmf_get_transports", 00:06:30.015 "nvmf_create_transport", 00:06:30.015 "nvmf_get_targets", 00:06:30.015 "nvmf_delete_target", 00:06:30.015 "nvmf_create_target", 00:06:30.015 "nvmf_subsystem_allow_any_host", 00:06:30.015 "nvmf_subsystem_remove_host", 00:06:30.015 "nvmf_subsystem_add_host", 00:06:30.015 "nvmf_subsystem_remove_ns", 00:06:30.015 "nvmf_subsystem_add_ns", 00:06:30.015 "nvmf_subsystem_listener_set_ana_state", 00:06:30.015 "nvmf_discovery_get_referrals", 00:06:30.015 "nvmf_discovery_remove_referral", 00:06:30.015 "nvmf_discovery_add_referral", 00:06:30.015 "nvmf_subsystem_remove_listener", 00:06:30.015 "nvmf_subsystem_add_listener", 00:06:30.015 "nvmf_delete_subsystem", 00:06:30.015 "nvmf_create_subsystem", 00:06:30.015 "nvmf_get_subsystems", 00:06:30.015 "nvmf_set_crdt", 00:06:30.015 "nvmf_set_config", 00:06:30.015 "nvmf_set_max_subsystems", 00:06:30.015 "iscsi_set_options", 00:06:30.015 "iscsi_get_auth_groups", 00:06:30.015 "iscsi_auth_group_remove_secret", 00:06:30.015 "iscsi_auth_group_add_secret", 00:06:30.015 "iscsi_delete_auth_group", 00:06:30.015 "iscsi_create_auth_group", 00:06:30.015 "iscsi_set_discovery_auth", 00:06:30.015 "iscsi_get_options", 00:06:30.015 "iscsi_target_node_request_logout", 00:06:30.015 "iscsi_target_node_set_redirect", 00:06:30.015 "iscsi_target_node_set_auth", 00:06:30.015 "iscsi_target_node_add_lun", 00:06:30.015 "iscsi_get_connections", 00:06:30.015 "iscsi_portal_group_set_auth", 00:06:30.015 "iscsi_start_portal_group", 00:06:30.015 "iscsi_delete_portal_group", 00:06:30.015 "iscsi_create_portal_group", 00:06:30.015 "iscsi_get_portal_groups", 00:06:30.015 "iscsi_delete_target_node", 00:06:30.015 "iscsi_target_node_remove_pg_ig_maps", 00:06:30.015 "iscsi_target_node_add_pg_ig_maps", 00:06:30.015 "iscsi_create_target_node", 00:06:30.015 "iscsi_get_target_nodes", 00:06:30.015 "iscsi_delete_initiator_group", 00:06:30.015 "iscsi_initiator_group_remove_initiators", 00:06:30.015 "iscsi_initiator_group_add_initiators", 00:06:30.015 "iscsi_create_initiator_group", 00:06:30.015 "iscsi_get_initiator_groups", 00:06:30.015 "iaa_scan_accel_module", 00:06:30.015 "dsa_scan_accel_module", 00:06:30.015 "ioat_scan_accel_module", 00:06:30.015 "accel_error_inject_error", 00:06:30.015 "bdev_daos_resize", 00:06:30.015 "bdev_daos_delete", 00:06:30.015 "bdev_daos_create", 00:06:30.015 "bdev_virtio_attach_controller", 00:06:30.015 "bdev_virtio_scsi_get_devices", 00:06:30.015 "bdev_virtio_detach_controller", 00:06:30.015 "bdev_virtio_blk_set_hotplug", 00:06:30.015 "bdev_ftl_set_property", 00:06:30.015 "bdev_ftl_get_properties", 00:06:30.015 "bdev_ftl_get_stats", 00:06:30.015 "bdev_ftl_unmap", 00:06:30.015 "bdev_ftl_unload", 00:06:30.015 "bdev_ftl_delete", 00:06:30.015 "bdev_ftl_load", 00:06:30.015 "bdev_ftl_create", 00:06:30.015 "bdev_aio_delete", 00:06:30.015 "bdev_aio_rescan", 00:06:30.015 "bdev_aio_create", 
00:06:30.015 "blobfs_create", 00:06:30.015 "blobfs_detect", 00:06:30.015 "blobfs_set_cache_size", 00:06:30.015 "bdev_zone_block_delete", 00:06:30.015 "bdev_zone_block_create", 00:06:30.015 "bdev_delay_delete", 00:06:30.015 "bdev_delay_create", 00:06:30.015 "bdev_delay_update_latency", 00:06:30.015 "bdev_split_delete", 00:06:30.015 "bdev_split_create", 00:06:30.015 "bdev_error_inject_error", 00:06:30.015 "bdev_error_delete", 00:06:30.015 "bdev_error_create", 00:06:30.015 "bdev_raid_set_options", 00:06:30.015 "bdev_raid_remove_base_bdev", 00:06:30.015 "bdev_raid_add_base_bdev", 00:06:30.015 "bdev_raid_delete", 00:06:30.015 "bdev_raid_create", 00:06:30.015 "bdev_raid_get_bdevs", 00:06:30.015 "bdev_lvol_grow_lvstore", 00:06:30.015 "bdev_lvol_get_lvols", 00:06:30.015 "bdev_lvol_get_lvstores", 00:06:30.015 "bdev_lvol_delete", 00:06:30.015 "bdev_lvol_set_read_only", 00:06:30.015 "bdev_lvol_resize", 00:06:30.015 "bdev_lvol_decouple_parent", 00:06:30.015 "bdev_lvol_inflate", 00:06:30.015 "bdev_lvol_rename", 00:06:30.015 "bdev_lvol_clone_bdev", 00:06:30.015 "bdev_lvol_clone", 00:06:30.015 "bdev_lvol_snapshot", 00:06:30.015 "bdev_lvol_create", 00:06:30.015 "bdev_lvol_delete_lvstore", 00:06:30.015 "bdev_lvol_rename_lvstore", 00:06:30.015 "bdev_lvol_create_lvstore", 00:06:30.016 "bdev_passthru_delete", 00:06:30.016 "bdev_passthru_create", 00:06:30.016 "bdev_nvme_cuse_unregister", 00:06:30.016 "bdev_nvme_cuse_register", 00:06:30.016 "bdev_opal_new_user", 00:06:30.016 "bdev_opal_set_lock_state", 00:06:30.016 "bdev_opal_delete", 00:06:30.016 "bdev_opal_get_info", 00:06:30.016 "bdev_opal_create", 00:06:30.016 "bdev_nvme_opal_revert", 00:06:30.016 "bdev_nvme_opal_init", 00:06:30.016 "bdev_nvme_send_cmd", 00:06:30.016 "bdev_nvme_get_path_iostat", 00:06:30.016 "bdev_nvme_get_mdns_discovery_info", 00:06:30.016 "bdev_nvme_stop_mdns_discovery", 00:06:30.016 "bdev_nvme_start_mdns_discovery", 00:06:30.016 "bdev_nvme_set_multipath_policy", 00:06:30.016 "bdev_nvme_set_preferred_path", 00:06:30.016 "bdev_nvme_get_io_paths", 00:06:30.016 "bdev_nvme_remove_error_injection", 00:06:30.016 "bdev_nvme_add_error_injection", 00:06:30.016 "bdev_nvme_get_discovery_info", 00:06:30.016 "bdev_nvme_stop_discovery", 00:06:30.016 "bdev_nvme_start_discovery", 00:06:30.016 "bdev_nvme_get_controller_health_info", 00:06:30.016 "bdev_nvme_disable_controller", 00:06:30.016 "bdev_nvme_enable_controller", 00:06:30.016 "bdev_nvme_reset_controller", 00:06:30.016 "bdev_nvme_get_transport_statistics", 00:06:30.016 "bdev_nvme_apply_firmware", 00:06:30.016 "bdev_nvme_detach_controller", 00:06:30.016 "bdev_nvme_get_controllers", 00:06:30.016 "bdev_nvme_attach_controller", 00:06:30.016 "bdev_nvme_set_hotplug", 00:06:30.016 "bdev_nvme_set_options", 00:06:30.016 "bdev_null_resize", 00:06:30.016 "bdev_null_delete", 00:06:30.016 "bdev_null_create", 00:06:30.016 "bdev_malloc_delete", 00:06:30.016 "bdev_malloc_create" 00:06:30.016 ] 00:06:30.016 20:33:13 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:30.016 20:33:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:30.016 20:33:13 -- common/autotest_common.sh@10 -- # set +x 00:06:30.016 20:33:13 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:30.016 20:33:13 -- spdkcli/tcp.sh@38 -- # killprocess 40220 00:06:30.016 20:33:13 -- common/autotest_common.sh@926 -- # '[' -z 40220 ']' 00:06:30.016 20:33:13 -- common/autotest_common.sh@930 -- # kill -0 40220 00:06:30.016 20:33:13 -- common/autotest_common.sh@931 -- # uname 00:06:30.016 20:33:13 -- common/autotest_common.sh@931 -- 
# '[' Linux = Linux ']' 00:06:30.016 20:33:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 40220 00:06:30.016 killing process with pid 40220 00:06:30.016 20:33:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:30.016 20:33:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:30.016 20:33:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 40220' 00:06:30.016 20:33:13 -- common/autotest_common.sh@945 -- # kill 40220 00:06:30.016 20:33:13 -- common/autotest_common.sh@950 -- # wait 40220 00:06:32.565 00:06:32.565 real 0m3.999s 00:06:32.565 user 0m7.017s 00:06:32.565 sys 0m0.574s 00:06:32.565 20:33:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.565 ************************************ 00:06:32.565 END TEST spdkcli_tcp 00:06:32.565 ************************************ 00:06:32.565 20:33:15 -- common/autotest_common.sh@10 -- # set +x 00:06:32.565 20:33:15 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:32.565 20:33:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:32.565 20:33:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:32.565 20:33:15 -- common/autotest_common.sh@10 -- # set +x 00:06:32.565 ************************************ 00:06:32.565 START TEST dpdk_mem_utility 00:06:32.565 ************************************ 00:06:32.565 20:33:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:32.565 * Looking for test storage... 00:06:32.565 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:32.565 20:33:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:32.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.565 20:33:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=40352 00:06:32.565 20:33:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 40352 00:06:32.565 20:33:15 -- common/autotest_common.sh@819 -- # '[' -z 40352 ']' 00:06:32.565 20:33:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:32.565 20:33:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.565 20:33:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:32.565 20:33:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.565 20:33:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:32.565 20:33:15 -- common/autotest_common.sh@10 -- # set +x 00:06:32.565 [2024-04-15 20:33:15.835624] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:06:32.565 [2024-04-15 20:33:15.835803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid40352 ] 00:06:32.565 [2024-04-15 20:33:16.000973] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.824 [2024-04-15 20:33:16.171817] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:32.824 [2024-04-15 20:33:16.172006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.762 20:33:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:33.762 20:33:17 -- common/autotest_common.sh@852 -- # return 0 00:06:33.762 20:33:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:33.762 20:33:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:33.762 20:33:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:33.762 20:33:17 -- common/autotest_common.sh@10 -- # set +x 00:06:33.762 { 00:06:33.762 "filename": "/tmp/spdk_mem_dump.txt" 00:06:33.762 } 00:06:33.762 20:33:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:33.762 20:33:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:34.024 DPDK memory size 868.000000 MiB in 1 heap(s) 00:06:34.024 1 heaps totaling size 868.000000 MiB 00:06:34.024 size: 868.000000 MiB heap id: 0 00:06:34.024 end heaps---------- 00:06:34.024 8 mempools totaling size 646.224487 MiB 00:06:34.024 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:34.024 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:34.024 size: 132.629456 MiB name: bdev_io_40352 00:06:34.024 size: 51.011292 MiB name: evtpool_40352 00:06:34.024 size: 50.003479 MiB name: msgpool_40352 00:06:34.024 size: 21.763794 MiB name: PDU_Pool 00:06:34.024 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:34.024 size: 0.026123 MiB name: Session_Pool 00:06:34.024 end mempools------- 00:06:34.024 6 memzones totaling size 4.142822 MiB 00:06:34.024 size: 1.000366 MiB name: RG_ring_0_40352 00:06:34.024 size: 1.000366 MiB name: RG_ring_1_40352 00:06:34.024 size: 1.000366 MiB name: RG_ring_4_40352 00:06:34.024 size: 1.000366 MiB name: RG_ring_5_40352 00:06:34.024 size: 0.125366 MiB name: RG_ring_2_40352 00:06:34.024 size: 0.015991 MiB name: RG_ring_3_40352 00:06:34.024 end memzones------- 00:06:34.024 20:33:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:34.024 heap id: 0 total size: 868.000000 MiB number of busy elements: 265 number of free elements: 18 00:06:34.024 list of free elements. 
size: 18.351685 MiB 00:06:34.024 element at address: 0x200000400000 with size: 1.999451 MiB 00:06:34.024 element at address: 0x200000800000 with size: 1.996887 MiB 00:06:34.024 element at address: 0x200007000000 with size: 1.995972 MiB 00:06:34.024 element at address: 0x20000b200000 with size: 1.995972 MiB 00:06:34.024 element at address: 0x20001c100040 with size: 0.999939 MiB 00:06:34.024 element at address: 0x20001c500040 with size: 0.999939 MiB 00:06:34.024 element at address: 0x20001c600000 with size: 0.999084 MiB 00:06:34.024 element at address: 0x200003e00000 with size: 0.996094 MiB 00:06:34.024 element at address: 0x200035200000 with size: 0.994324 MiB 00:06:34.024 element at address: 0x20001be00000 with size: 0.959656 MiB 00:06:34.024 element at address: 0x20001c900040 with size: 0.936401 MiB 00:06:34.024 element at address: 0x200000200000 with size: 0.833862 MiB 00:06:34.024 element at address: 0x20001e000000 with size: 0.563171 MiB 00:06:34.024 element at address: 0x20001c200000 with size: 0.487976 MiB 00:06:34.024 element at address: 0x20001ca00000 with size: 0.485413 MiB 00:06:34.024 element at address: 0x20002b400000 with size: 0.397766 MiB 00:06:34.024 element at address: 0x200013800000 with size: 0.360229 MiB 00:06:34.024 element at address: 0x200003a00000 with size: 0.349548 MiB 00:06:34.024 list of standard malloc elements. size: 199.275513 MiB 00:06:34.024 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:06:34.024 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:06:34.024 element at address: 0x20001bffff80 with size: 1.000183 MiB 00:06:34.024 element at address: 0x20001c3fff80 with size: 1.000183 MiB 00:06:34.024 element at address: 0x20001c7fff80 with size: 1.000183 MiB 00:06:34.024 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:34.024 element at address: 0x20001c9eff40 with size: 0.062683 MiB 00:06:34.024 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:34.024 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:06:34.024 element at address: 0x20001c9efdc0 with size: 0.000366 MiB 00:06:34.024 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:06:34.024 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:06:34.024 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:06:34.024 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:06:34.024 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:06:34.024 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:06:34.024 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:06:34.024 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:06:34.024 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:06:34.024 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:06:34.024 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:06:34.024 element at address: 0x2000002d6180 with size: 0.000244 MiB 00:06:34.024 element at address: 0x2000002d6280 with size: 0.000244 MiB 00:06:34.024 element at address: 0x2000002d6380 with size: 0.000244 MiB 00:06:34.024 element at address: 0x2000002d6480 with size: 0.000244 MiB 00:06:34.024 element at address: 0x2000002d6580 with size: 0.000244 MiB 00:06:34.024 element at address: 0x2000002d6680 with size: 0.000244 MiB 00:06:34.024 element at address: 0x2000002d6780 with size: 0.000244 MiB 00:06:34.024 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:06:34.024 element at address: 0x2000002d6b00 with size: 0.000244 MiB 
00:06:34.024 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:06:34.024 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:06:34.024 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:06:34.024 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:06:34.024 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:06:34.024 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:06:34.024 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:06:34.024 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:06:34.024 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:06:34.024 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:06:34.024 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:06:34.024 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:06:34.024 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:06:34.024 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:06:34.024 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:06:34.024 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:34.024 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:34.024 element at address: 0x200003a597c0 with size: 0.000244 MiB 00:06:34.024 element at address: 0x200003a598c0 with size: 0.000244 MiB 00:06:34.024 element at address: 0x200003a599c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x200003a59ac0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x200003a59bc0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x200003a59cc0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x200003a59dc0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x200003a59ec0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x200003a59fc0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x200003a5a0c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x200003aff980 with size: 0.000244 MiB 00:06:34.025 element at address: 0x200003affa80 with size: 0.000244 MiB 00:06:34.025 element at address: 0x200003eff000 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:06:34.025 element at 
address: 0x20000b1ff400 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:06:34.025 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:06:34.025 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:06:34.025 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:06:34.025 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:06:34.025 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:06:34.025 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:06:34.025 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:06:34.025 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:06:34.025 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:06:34.025 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:06:34.025 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:06:34.025 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:06:34.025 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001385c380 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001385c480 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001385c580 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001385c680 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001385c780 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001385c880 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001385c980 with size: 0.000244 MiB 00:06:34.025 element at address: 0x2000138dccc0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001befdd00 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001c27cec0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001c27cfc0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001c27d0c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001c27d1c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001c27d2c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001c27d3c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001c27d4c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001c27d5c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001c27d6c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001c27d7c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001c27d8c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001c27d9c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001c2fdd00 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001c6ffc40 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001c9efbc0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001c9efcc0 
with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001cabc680 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0902c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0903c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0904c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0905c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0906c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0907c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0908c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0909c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e090ac0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e090bc0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e090cc0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e090dc0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e090ec0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e090fc0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0910c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0911c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0912c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0913c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0914c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0915c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0916c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0917c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0918c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0919c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e091ac0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e091bc0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e091cc0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e091dc0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e091ec0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e091fc0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0920c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0921c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0922c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0923c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0924c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0925c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0926c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0927c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0928c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0929c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e092ac0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e092bc0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e092cc0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e092dc0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e092ec0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e092fc0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0930c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0931c0 with size: 0.000244 MiB 
00:06:34.025 element at address: 0x20001e0932c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0933c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0934c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0935c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0936c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0937c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0938c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0939c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e093ac0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e093bc0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e093cc0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e093dc0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e093ec0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e093fc0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0940c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0941c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0942c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0943c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0944c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0945c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0946c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0947c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0948c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e0949c0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e094ac0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e094bc0 with size: 0.000244 MiB 00:06:34.025 element at address: 0x20001e094cc0 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20001e094dc0 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20001e094ec0 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20001e094fc0 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20001e0950c0 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20001e0951c0 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20001e0952c0 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20001e0953c0 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b465d40 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b465e40 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46cb00 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46cd80 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46ce80 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46cf80 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46d080 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46d180 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46d280 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46d380 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46d480 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46d580 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46d680 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46d780 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46d880 with size: 0.000244 MiB 00:06:34.026 element at 
address: 0x20002b46d980 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46da80 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46db80 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46dc80 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46dd80 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46de80 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46df80 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46e080 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46e180 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46e280 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46e380 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46e480 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46e580 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46e680 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46e780 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46e880 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46e980 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46ea80 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46eb80 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46ec80 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46ed80 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46ee80 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46ef80 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46f080 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46f180 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46f280 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46f380 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46f480 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46f580 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46f680 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46f780 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46f880 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46f980 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46fa80 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46fb80 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46fc80 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46fd80 with size: 0.000244 MiB 00:06:34.026 element at address: 0x20002b46fe80 with size: 0.000244 MiB 00:06:34.026 list of memzone associated elements. 
size: 650.372803 MiB 00:06:34.026 element at address: 0x20001e0954c0 with size: 211.416809 MiB 00:06:34.026 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:34.026 element at address: 0x20002b46ff80 with size: 157.562622 MiB 00:06:34.026 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:34.026 element at address: 0x2000139def40 with size: 132.129089 MiB 00:06:34.026 associated memzone info: size: 132.128906 MiB name: MP_bdev_io_40352_0 00:06:34.026 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:06:34.026 associated memzone info: size: 48.002930 MiB name: MP_evtpool_40352_0 00:06:34.026 element at address: 0x200003fff340 with size: 48.003113 MiB 00:06:34.026 associated memzone info: size: 48.002930 MiB name: MP_msgpool_40352_0 00:06:34.026 element at address: 0x20001cbbe900 with size: 20.255615 MiB 00:06:34.026 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:34.026 element at address: 0x2000353feb00 with size: 18.005127 MiB 00:06:34.026 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:34.026 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:06:34.026 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_40352 00:06:34.026 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:06:34.026 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_40352 00:06:34.026 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:34.026 associated memzone info: size: 1.007996 MiB name: MP_evtpool_40352 00:06:34.026 element at address: 0x20001c2fde00 with size: 1.008179 MiB 00:06:34.026 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:34.026 element at address: 0x20001cabc780 with size: 1.008179 MiB 00:06:34.026 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:34.026 element at address: 0x20001befde00 with size: 1.008179 MiB 00:06:34.026 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:34.026 element at address: 0x2000138dcdc0 with size: 1.008179 MiB 00:06:34.026 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:34.026 element at address: 0x200003eff100 with size: 1.000549 MiB 00:06:34.026 associated memzone info: size: 1.000366 MiB name: RG_ring_0_40352 00:06:34.026 element at address: 0x200003affb80 with size: 1.000549 MiB 00:06:34.026 associated memzone info: size: 1.000366 MiB name: RG_ring_1_40352 00:06:34.026 element at address: 0x20001c6ffd40 with size: 1.000549 MiB 00:06:34.026 associated memzone info: size: 1.000366 MiB name: RG_ring_4_40352 00:06:34.026 element at address: 0x2000352fe8c0 with size: 1.000549 MiB 00:06:34.026 associated memzone info: size: 1.000366 MiB name: RG_ring_5_40352 00:06:34.026 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:06:34.026 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_40352 00:06:34.026 element at address: 0x20001c27dac0 with size: 0.500549 MiB 00:06:34.026 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:34.026 element at address: 0x20001385ca80 with size: 0.500549 MiB 00:06:34.026 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:34.026 element at address: 0x20001ca7c440 with size: 0.250549 MiB 00:06:34.026 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:34.026 element at address: 0x200003adf740 with size: 0.125549 MiB 00:06:34.026 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_40352 00:06:34.026 element at address: 0x20001bef5ac0 with size: 0.031799 MiB 00:06:34.026 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:34.026 element at address: 0x20002b465f40 with size: 0.023804 MiB 00:06:34.026 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:34.026 element at address: 0x200003adb500 with size: 0.016174 MiB 00:06:34.026 associated memzone info: size: 0.015991 MiB name: RG_ring_3_40352 00:06:34.026 element at address: 0x20002b46c0c0 with size: 0.002502 MiB 00:06:34.026 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:34.026 element at address: 0x2000002d6880 with size: 0.000366 MiB 00:06:34.026 associated memzone info: size: 0.000183 MiB name: MP_msgpool_40352 00:06:34.026 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:06:34.026 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_40352 00:06:34.026 element at address: 0x20002b46cc00 with size: 0.000366 MiB 00:06:34.026 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:34.026 20:33:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:34.026 20:33:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 40352 00:06:34.026 20:33:17 -- common/autotest_common.sh@926 -- # '[' -z 40352 ']' 00:06:34.026 20:33:17 -- common/autotest_common.sh@930 -- # kill -0 40352 00:06:34.026 20:33:17 -- common/autotest_common.sh@931 -- # uname 00:06:34.026 20:33:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:34.026 20:33:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 40352 00:06:34.026 killing process with pid 40352 00:06:34.026 20:33:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:34.026 20:33:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:34.026 20:33:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 40352' 00:06:34.026 20:33:17 -- common/autotest_common.sh@945 -- # kill 40352 00:06:34.026 20:33:17 -- common/autotest_common.sh@950 -- # wait 40352 00:06:36.629 ************************************ 00:06:36.629 END TEST dpdk_mem_utility 00:06:36.629 ************************************ 00:06:36.629 00:06:36.629 real 0m4.066s 00:06:36.629 user 0m3.997s 00:06:36.629 sys 0m0.539s 00:06:36.629 20:33:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.629 20:33:19 -- common/autotest_common.sh@10 -- # set +x 00:06:36.629 20:33:19 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:36.629 20:33:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:36.629 20:33:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:36.629 20:33:19 -- common/autotest_common.sh@10 -- # set +x 00:06:36.629 ************************************ 00:06:36.629 START TEST event 00:06:36.629 ************************************ 00:06:36.629 20:33:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:36.629 * Looking for test storage... 
00:06:36.629 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:36.629 20:33:19 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:36.629 20:33:19 -- bdev/nbd_common.sh@6 -- # set -e 00:06:36.629 20:33:19 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:36.629 20:33:19 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:36.629 20:33:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:36.629 20:33:19 -- common/autotest_common.sh@10 -- # set +x 00:06:36.629 ************************************ 00:06:36.629 START TEST event_perf 00:06:36.629 ************************************ 00:06:36.629 20:33:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:36.629 Running I/O for 1 seconds...[2024-04-15 20:33:19.878006] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:36.629 [2024-04-15 20:33:19.878165] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid40475 ] 00:06:36.629 [2024-04-15 20:33:20.053362] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:36.888 [2024-04-15 20:33:20.296504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.888 [2024-04-15 20:33:20.296700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.888 [2024-04-15 20:33:20.296614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:36.888 [2024-04-15 20:33:20.296713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:38.266 Running I/O for 1 seconds... 00:06:38.266 lcore 0: 280694 00:06:38.266 lcore 1: 280694 00:06:38.266 lcore 2: 280695 00:06:38.266 lcore 3: 280695 00:06:38.266 done. 00:06:38.266 ************************************ 00:06:38.266 END TEST event_perf 00:06:38.266 ************************************ 00:06:38.266 00:06:38.266 real 0m1.914s 00:06:38.266 user 0m4.690s 00:06:38.266 sys 0m0.130s 00:06:38.266 20:33:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.266 20:33:21 -- common/autotest_common.sh@10 -- # set +x 00:06:38.525 20:33:21 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:38.525 20:33:21 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:38.525 20:33:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:38.525 20:33:21 -- common/autotest_common.sh@10 -- # set +x 00:06:38.525 ************************************ 00:06:38.525 START TEST event_reactor 00:06:38.525 ************************************ 00:06:38.525 20:33:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:38.525 [2024-04-15 20:33:21.855814] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:06:38.525 [2024-04-15 20:33:21.855958] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid40528 ] 00:06:38.791 [2024-04-15 20:33:22.029491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.791 [2024-04-15 20:33:22.259654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.207 test_start 00:06:40.207 oneshot 00:06:40.207 tick 100 00:06:40.207 tick 100 00:06:40.207 tick 250 00:06:40.207 tick 100 00:06:40.207 tick 100 00:06:40.207 tick 250 00:06:40.207 tick 500 00:06:40.207 tick 100 00:06:40.207 tick 100 00:06:40.207 tick 100 00:06:40.207 tick 250 00:06:40.207 tick 100 00:06:40.207 tick 100 00:06:40.208 test_end 00:06:40.208 ************************************ 00:06:40.208 END TEST event_reactor 00:06:40.208 ************************************ 00:06:40.208 00:06:40.208 real 0m1.853s 00:06:40.208 user 0m1.646s 00:06:40.208 sys 0m0.106s 00:06:40.208 20:33:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.208 20:33:23 -- common/autotest_common.sh@10 -- # set +x 00:06:40.468 20:33:23 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:40.468 20:33:23 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:40.468 20:33:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:40.468 20:33:23 -- common/autotest_common.sh@10 -- # set +x 00:06:40.468 ************************************ 00:06:40.468 START TEST event_reactor_perf 00:06:40.468 ************************************ 00:06:40.468 20:33:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:40.468 [2024-04-15 20:33:23.772802] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:06:40.468 [2024-04-15 20:33:23.772961] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid40578 ] 00:06:40.468 [2024-04-15 20:33:23.920763] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.727 [2024-04-15 20:33:24.174474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.105 test_start 00:06:42.105 test_end 00:06:42.105 Performance: 753688 events per second 00:06:42.105 00:06:42.105 real 0m1.826s 00:06:42.105 user 0m1.624s 00:06:42.105 sys 0m0.101s 00:06:42.105 ************************************ 00:06:42.105 END TEST event_reactor_perf 00:06:42.105 ************************************ 00:06:42.105 20:33:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.105 20:33:25 -- common/autotest_common.sh@10 -- # set +x 00:06:42.364 20:33:25 -- event/event.sh@49 -- # uname -s 00:06:42.364 20:33:25 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:42.364 20:33:25 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:42.364 20:33:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:42.364 20:33:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:42.364 20:33:25 -- common/autotest_common.sh@10 -- # set +x 00:06:42.364 ************************************ 00:06:42.364 START TEST event_scheduler 00:06:42.364 ************************************ 00:06:42.364 20:33:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:42.364 * Looking for test storage... 00:06:42.364 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:42.364 20:33:25 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:42.364 20:33:25 -- scheduler/scheduler.sh@35 -- # scheduler_pid=40667 00:06:42.364 20:33:25 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:42.364 20:33:25 -- scheduler/scheduler.sh@37 -- # waitforlisten 40667 00:06:42.364 20:33:25 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:42.364 20:33:25 -- common/autotest_common.sh@819 -- # '[' -z 40667 ']' 00:06:42.364 20:33:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.364 20:33:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:42.364 20:33:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.364 20:33:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:42.364 20:33:25 -- common/autotest_common.sh@10 -- # set +x 00:06:42.623 [2024-04-15 20:33:25.905471] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
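The scheduler app above is started with --wait-for-rpc, so the harness must block until the RPC socket is live before configuring it; that is what the waitforlisten 40667 call with max_retries=100 is doing. An approximate reconstruction of the pattern (the real helper lives in test/common/autotest_common.sh; the polling method shown here is an assumption):

    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
        local i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died while starting
            # probe the socket with a harmless RPC until it answers
            scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }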
00:06:42.623 [2024-04-15 20:33:25.905849] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid40667 ] 00:06:42.623 [2024-04-15 20:33:26.063881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.883 [2024-04-15 20:33:26.273818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.883 [2024-04-15 20:33:26.273984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.883 [2024-04-15 20:33:26.274177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.883 [2024-04-15 20:33:26.274188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.450 20:33:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:43.450 20:33:26 -- common/autotest_common.sh@852 -- # return 0 00:06:43.450 20:33:26 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:43.450 20:33:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:43.450 20:33:26 -- common/autotest_common.sh@10 -- # set +x 00:06:43.450 POWER: Env isn't set yet! 00:06:43.450 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:43.450 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:43.450 POWER: Cannot set governor of lcore 0 to userspace 00:06:43.450 POWER: Attempting to initialise PSTAT power management... 00:06:43.450 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:43.450 POWER: Cannot set governor of lcore 0 to performance 00:06:43.450 POWER: Attempting to initialise AMD PSTATE power management... 00:06:43.450 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:43.450 POWER: Cannot set governor of lcore 0 to userspace 00:06:43.450 POWER: Attempting to initialise CPPC power management... 00:06:43.450 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:43.450 POWER: Cannot set governor of lcore 0 to userspace 00:06:43.450 POWER: Attempting to initialise VM power management... 00:06:43.450 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:43.450 POWER: Unable to set Power Management Environment for lcore 0 00:06:43.450 [2024-04-15 20:33:26.684130] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:06:43.450 [2024-04-15 20:33:26.684175] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:06:43.450 [2024-04-15 20:33:26.684220] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:06:43.450 20:33:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:43.450 20:33:26 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:43.450 20:33:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:43.450 20:33:26 -- common/autotest_common.sh@10 -- # set +x 00:06:43.710 [2024-04-15 20:33:27.049372] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
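The POWER/governor errors above are expected on this VM: the dynamic scheduler probes each cpufreq driver in turn, cannot open the sysfs governor files, and falls back gracefully, so the test still proceeds to "Scheduler test application started". The RPC sequence being driven through the rpc_cmd wrapper, sketched with the stock client from the repo root:

    scripts/rpc.py framework_set_scheduler dynamic   # governor init may fail; scheduler is still set
    scripts/rpc.py framework_start_init              # completes the startup deferred by --wait-for-rpc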
00:06:43.710 20:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:43.710 20:33:27 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:43.710 20:33:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:43.710 20:33:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:43.710 20:33:27 -- common/autotest_common.sh@10 -- # set +x 00:06:43.710 ************************************ 00:06:43.710 START TEST scheduler_create_thread 00:06:43.710 ************************************ 00:06:43.710 20:33:27 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:06:43.710 20:33:27 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:43.710 20:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:43.710 20:33:27 -- common/autotest_common.sh@10 -- # set +x 00:06:43.710 2 00:06:43.710 20:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:43.710 20:33:27 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:43.710 20:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:43.711 20:33:27 -- common/autotest_common.sh@10 -- # set +x 00:06:43.711 3 00:06:43.711 20:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:43.711 20:33:27 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:43.711 20:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:43.711 20:33:27 -- common/autotest_common.sh@10 -- # set +x 00:06:43.711 4 00:06:43.711 20:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:43.711 20:33:27 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:43.711 20:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:43.711 20:33:27 -- common/autotest_common.sh@10 -- # set +x 00:06:43.711 5 00:06:43.711 20:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:43.711 20:33:27 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:43.711 20:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:43.711 20:33:27 -- common/autotest_common.sh@10 -- # set +x 00:06:43.711 6 00:06:43.711 20:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:43.711 20:33:27 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:43.711 20:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:43.711 20:33:27 -- common/autotest_common.sh@10 -- # set +x 00:06:43.711 7 00:06:43.711 20:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:43.711 20:33:27 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:43.711 20:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:43.711 20:33:27 -- common/autotest_common.sh@10 -- # set +x 00:06:43.711 8 00:06:43.711 20:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:43.711 20:33:27 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:43.711 20:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:43.711 20:33:27 -- common/autotest_common.sh@10 -- # set +x 00:06:43.711 9 00:06:43.711 
20:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:43.711 20:33:27 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:43.711 20:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:43.711 20:33:27 -- common/autotest_common.sh@10 -- # set +x 00:06:43.711 10 00:06:43.711 20:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:43.711 20:33:27 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:43.711 20:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:43.711 20:33:27 -- common/autotest_common.sh@10 -- # set +x 00:06:43.711 20:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:43.711 20:33:27 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:43.711 20:33:27 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:43.711 20:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:43.711 20:33:27 -- common/autotest_common.sh@10 -- # set +x 00:06:44.277 20:33:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:44.277 20:33:27 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:44.277 20:33:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:44.277 20:33:27 -- common/autotest_common.sh@10 -- # set +x 00:06:45.690 20:33:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:45.690 20:33:29 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:45.691 20:33:29 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:45.691 20:33:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:45.691 20:33:29 -- common/autotest_common.sh@10 -- # set +x 00:06:47.067 ************************************ 00:06:47.067 END TEST scheduler_create_thread 00:06:47.067 ************************************ 00:06:47.067 20:33:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:47.067 00:06:47.067 real 0m3.094s 00:06:47.067 user 0m0.017s 00:06:47.067 sys 0m0.005s 00:06:47.067 20:33:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.067 20:33:30 -- common/autotest_common.sh@10 -- # set +x 00:06:47.067 20:33:30 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:47.067 20:33:30 -- scheduler/scheduler.sh@46 -- # killprocess 40667 00:06:47.067 20:33:30 -- common/autotest_common.sh@926 -- # '[' -z 40667 ']' 00:06:47.067 20:33:30 -- common/autotest_common.sh@930 -- # kill -0 40667 00:06:47.067 20:33:30 -- common/autotest_common.sh@931 -- # uname 00:06:47.067 20:33:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:47.067 20:33:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 40667 00:06:47.067 killing process with pid 40667 00:06:47.067 20:33:30 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:47.067 20:33:30 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:47.067 20:33:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 40667' 00:06:47.067 20:33:30 -- common/autotest_common.sh@945 -- # kill 40667 00:06:47.067 20:33:30 -- common/autotest_common.sh@950 -- # wait 40667 00:06:47.067 [2024-04-15 20:33:30.540909] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
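killprocess, exercised above for pid 40667 (and registered in the trap on entry), is the suite's standard teardown helper. A condensed sketch of the behavior visible in the xtrace — existence check, process-name lookup, the sudo special case, kill, then wait — with details beyond the trace treated as approximations of the real helper in test/common/autotest_common.sh:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                        # must still be alive
        local pname=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_2 above
        echo "killing process with pid $pid"
        [ "$pname" = sudo ] || kill "$pid"                # sudo-wrapped targets handled separately
        wait "$pid"                                       # reap it so its CPU locks are released
    }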
00:06:48.443 ************************************ 00:06:48.443 END TEST event_scheduler 00:06:48.443 ************************************ 00:06:48.443 00:06:48.443 real 0m6.238s 00:06:48.443 user 0m12.104s 00:06:48.443 sys 0m0.424s 00:06:48.443 20:33:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.443 20:33:31 -- common/autotest_common.sh@10 -- # set +x 00:06:48.443 20:33:31 -- event/event.sh@51 -- # modprobe -n nbd 00:06:48.443 modprobe: FATAL: Module nbd not found. 00:06:48.443 20:33:31 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:48.443 20:33:31 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:48.443 20:33:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:48.443 20:33:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:48.443 20:33:31 -- common/autotest_common.sh@10 -- # set +x 00:06:48.443 ************************************ 00:06:48.443 START TEST cpu_locks 00:06:48.443 ************************************ 00:06:48.443 20:33:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:48.700 * Looking for test storage... 00:06:48.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:48.700 20:33:32 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:48.700 20:33:32 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:48.700 20:33:32 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:48.700 20:33:32 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:48.700 20:33:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:48.701 20:33:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:48.701 20:33:32 -- common/autotest_common.sh@10 -- # set +x 00:06:48.701 ************************************ 00:06:48.701 START TEST default_locks 00:06:48.701 ************************************ 00:06:48.701 20:33:32 -- common/autotest_common.sh@1104 -- # default_locks 00:06:48.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.701 20:33:32 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=40841 00:06:48.701 20:33:32 -- event/cpu_locks.sh@47 -- # waitforlisten 40841 00:06:48.701 20:33:32 -- common/autotest_common.sh@819 -- # '[' -z 40841 ']' 00:06:48.701 20:33:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.701 20:33:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:48.701 20:33:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.701 20:33:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:48.701 20:33:32 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:48.701 20:33:32 -- common/autotest_common.sh@10 -- # set +x 00:06:48.701 [2024-04-15 20:33:32.176144] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:06:48.701 [2024-04-15 20:33:32.176315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid40841 ] 00:06:48.957 [2024-04-15 20:33:32.338702] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.214 [2024-04-15 20:33:32.535754] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:49.214 [2024-04-15 20:33:32.535959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.152 20:33:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:50.152 20:33:33 -- common/autotest_common.sh@852 -- # return 0 00:06:50.152 20:33:33 -- event/cpu_locks.sh@49 -- # locks_exist 40841 00:06:50.152 20:33:33 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:50.411 20:33:33 -- event/cpu_locks.sh@22 -- # lslocks -p 40841 00:06:51.347 20:33:34 -- event/cpu_locks.sh@50 -- # killprocess 40841 00:06:51.347 20:33:34 -- common/autotest_common.sh@926 -- # '[' -z 40841 ']' 00:06:51.347 20:33:34 -- common/autotest_common.sh@930 -- # kill -0 40841 00:06:51.347 20:33:34 -- common/autotest_common.sh@931 -- # uname 00:06:51.347 20:33:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:51.347 20:33:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 40841 00:06:51.347 20:33:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:51.347 20:33:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:51.347 20:33:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 40841' 00:06:51.347 killing process with pid 40841 00:06:51.347 20:33:34 -- common/autotest_common.sh@945 -- # kill 40841 00:06:51.347 20:33:34 -- common/autotest_common.sh@950 -- # wait 40841 00:06:53.253 20:33:36 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 40841 00:06:53.253 20:33:36 -- common/autotest_common.sh@640 -- # local es=0 00:06:53.253 20:33:36 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 40841 00:06:53.253 20:33:36 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:53.253 20:33:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:53.253 20:33:36 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:53.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.253 ERROR: process (pid: 40841) is no longer running 00:06:53.253 20:33:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:53.253 20:33:36 -- common/autotest_common.sh@643 -- # waitforlisten 40841 00:06:53.253 20:33:36 -- common/autotest_common.sh@819 -- # '[' -z 40841 ']' 00:06:53.253 20:33:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.253 20:33:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:53.253 20:33:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
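locks_exist, used above to verify that pid 40841 actually holds its core lock, reduces to the lslocks-plus-grep pair shown in the xtrace; the lock files themselves follow the /var/tmp/spdk_cpu_lock* naming seen in the cleanup glob. A sketch of the check:

    locks_exist() {
        # true iff the given pid holds at least one spdk_cpu_lock_NNN file lock
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }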
00:06:53.253 20:33:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:53.253 20:33:36 -- common/autotest_common.sh@10 -- # set +x 00:06:53.253 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (40841) - No such process 00:06:53.253 20:33:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:53.253 20:33:36 -- common/autotest_common.sh@852 -- # return 1 00:06:53.253 20:33:36 -- common/autotest_common.sh@643 -- # es=1 00:06:53.253 20:33:36 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:53.253 20:33:36 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:53.253 20:33:36 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:53.253 20:33:36 -- event/cpu_locks.sh@54 -- # no_locks 00:06:53.253 20:33:36 -- event/cpu_locks.sh@26 -- # lock_files=(/var/tmp/spdk_cpu_lock*) 00:06:53.253 20:33:36 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:53.253 20:33:36 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:53.253 00:06:53.253 real 0m4.696s 00:06:53.253 user 0m4.941s 00:06:53.253 sys 0m1.076s 00:06:53.253 20:33:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.253 20:33:36 -- common/autotest_common.sh@10 -- # set +x 00:06:53.253 ************************************ 00:06:53.253 END TEST default_locks 00:06:53.253 ************************************ 00:06:53.512 20:33:36 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:53.512 20:33:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:53.512 20:33:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:53.512 20:33:36 -- common/autotest_common.sh@10 -- # set +x 00:06:53.512 ************************************ 00:06:53.512 START TEST default_locks_via_rpc 00:06:53.512 ************************************ 00:06:53.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.512 20:33:36 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:06:53.512 20:33:36 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=40930 00:06:53.512 20:33:36 -- event/cpu_locks.sh@63 -- # waitforlisten 40930 00:06:53.512 20:33:36 -- common/autotest_common.sh@819 -- # '[' -z 40930 ']' 00:06:53.512 20:33:36 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:53.512 20:33:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.512 20:33:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:53.512 20:33:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.512 20:33:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:53.512 20:33:36 -- common/autotest_common.sh@10 -- # set +x 00:06:53.512 [2024-04-15 20:33:36.935256] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
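The NOT/es bookkeeping in the default_locks teardown above is the suite's negative-test idiom: run a command that is expected to fail and invert its exit status, while letting genuine crashes through. A rough sketch of the idea (the real helper also validates the wrapped command via valid_exec_arg, as the type -t checks show):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"   # signal/crash exit codes propagate, not inverted
        (( es != 0 ))                    # succeed (0) only if the command failed
    }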
00:06:53.512 [2024-04-15 20:33:36.935421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid40930 ] 00:06:53.772 [2024-04-15 20:33:37.091797] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.772 [2024-04-15 20:33:37.265090] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:53.772 [2024-04-15 20:33:37.265283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.168 20:33:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:55.168 20:33:38 -- common/autotest_common.sh@852 -- # return 0 00:06:55.168 20:33:38 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:55.168 20:33:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:55.168 20:33:38 -- common/autotest_common.sh@10 -- # set +x 00:06:55.168 20:33:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:55.168 20:33:38 -- event/cpu_locks.sh@67 -- # no_locks 00:06:55.168 20:33:38 -- event/cpu_locks.sh@26 -- # lock_files=(/var/tmp/spdk_cpu_lock*) 00:06:55.168 20:33:38 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:55.168 20:33:38 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:55.168 20:33:38 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:55.168 20:33:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:55.168 20:33:38 -- common/autotest_common.sh@10 -- # set +x 00:06:55.168 20:33:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:55.168 20:33:38 -- event/cpu_locks.sh@71 -- # locks_exist 40930 00:06:55.168 20:33:38 -- event/cpu_locks.sh@22 -- # lslocks -p 40930 00:06:55.168 20:33:38 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:56.106 20:33:39 -- event/cpu_locks.sh@73 -- # killprocess 40930 00:06:56.106 20:33:39 -- common/autotest_common.sh@926 -- # '[' -z 40930 ']' 00:06:56.106 20:33:39 -- common/autotest_common.sh@930 -- # kill -0 40930 00:06:56.106 20:33:39 -- common/autotest_common.sh@931 -- # uname 00:06:56.106 20:33:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:56.106 20:33:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 40930 00:06:56.106 20:33:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:56.106 20:33:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:56.106 killing process with pid 40930 00:06:56.106 20:33:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 40930' 00:06:56.106 20:33:39 -- common/autotest_common.sh@945 -- # kill 40930 00:06:56.106 20:33:39 -- common/autotest_common.sh@950 -- # wait 40930 00:06:58.011 ************************************ 00:06:58.011 END TEST default_locks_via_rpc 00:06:58.011 ************************************ 00:06:58.011 00:06:58.011 real 0m4.642s 00:06:58.011 user 0m4.803s 00:06:58.011 sys 0m1.089s 00:06:58.011 20:33:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.011 20:33:41 -- common/autotest_common.sh@10 -- # set +x 00:06:58.011 20:33:41 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:58.011 20:33:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:58.011 20:33:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:58.011 20:33:41 -- common/autotest_common.sh@10 -- # set +x 00:06:58.011 
************************************ 00:06:58.011 START TEST non_locking_app_on_locked_coremask 00:06:58.011 ************************************ 00:06:58.011 20:33:41 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:06:58.011 20:33:41 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=41023 00:06:58.011 20:33:41 -- event/cpu_locks.sh@81 -- # waitforlisten 41023 /var/tmp/spdk.sock 00:06:58.011 20:33:41 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:58.011 20:33:41 -- common/autotest_common.sh@819 -- # '[' -z 41023 ']' 00:06:58.011 20:33:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.011 20:33:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:58.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.011 20:33:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.011 20:33:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:58.011 20:33:41 -- common/autotest_common.sh@10 -- # set +x 00:06:58.271 [2024-04-15 20:33:41.636662] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:58.271 [2024-04-15 20:33:41.636828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid41023 ] 00:06:58.530 [2024-04-15 20:33:41.792574] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.530 [2024-04-15 20:33:41.967226] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:58.530 [2024-04-15 20:33:41.967431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.907 20:33:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:59.907 20:33:43 -- common/autotest_common.sh@852 -- # return 0 00:06:59.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:59.907 20:33:43 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=41058 00:06:59.907 20:33:43 -- event/cpu_locks.sh@85 -- # waitforlisten 41058 /var/tmp/spdk2.sock 00:06:59.907 20:33:43 -- common/autotest_common.sh@819 -- # '[' -z 41058 ']' 00:06:59.907 20:33:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:59.907 20:33:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:59.907 20:33:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:59.907 20:33:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:59.907 20:33:43 -- common/autotest_common.sh@10 -- # set +x 00:06:59.907 20:33:43 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:59.907 [2024-04-15 20:33:43.227162] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:59.907 [2024-04-15 20:33:43.227335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid41058 ] 00:07:00.166 [2024-04-15 20:33:43.407707] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
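The point of non_locking_app_on_locked_coremask is visible in the two launch lines above: the first target claims core 0's lock, and the second is started with --disable-cpumask-locks so it can share the core anyway. Reduced to its essence, with the socket paths used in this run:

    build/bin/spdk_tgt -m 0x1 &                                         # claims /var/tmp/spdk_cpu_lock_000
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    # the second target prints 'CPU core locks deactivated' and starts fine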
00:07:00.166 [2024-04-15 20:33:43.407776] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.425 [2024-04-15 20:33:43.739551] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:00.425 [2024-04-15 20:33:43.739759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.802 20:33:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:01.802 20:33:45 -- common/autotest_common.sh@852 -- # return 0 00:07:01.802 20:33:45 -- event/cpu_locks.sh@87 -- # locks_exist 41023 00:07:01.802 20:33:45 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:01.802 20:33:45 -- event/cpu_locks.sh@22 -- # lslocks -p 41023 00:07:03.744 20:33:46 -- event/cpu_locks.sh@89 -- # killprocess 41023 00:07:03.744 20:33:46 -- common/autotest_common.sh@926 -- # '[' -z 41023 ']' 00:07:03.744 20:33:46 -- common/autotest_common.sh@930 -- # kill -0 41023 00:07:03.744 20:33:46 -- common/autotest_common.sh@931 -- # uname 00:07:03.744 20:33:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:03.744 20:33:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 41023 00:07:03.744 killing process with pid 41023 00:07:03.744 20:33:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:03.744 20:33:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:03.744 20:33:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 41023' 00:07:03.744 20:33:46 -- common/autotest_common.sh@945 -- # kill 41023 00:07:03.744 20:33:46 -- common/autotest_common.sh@950 -- # wait 41023 00:07:07.936 20:33:51 -- event/cpu_locks.sh@90 -- # killprocess 41058 00:07:07.936 20:33:51 -- common/autotest_common.sh@926 -- # '[' -z 41058 ']' 00:07:07.936 20:33:51 -- common/autotest_common.sh@930 -- # kill -0 41058 00:07:07.936 20:33:51 -- common/autotest_common.sh@931 -- # uname 00:07:07.936 20:33:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:07.936 20:33:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 41058 00:07:07.936 20:33:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:07.936 killing process with pid 41058 00:07:07.936 20:33:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:07.936 20:33:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 41058' 00:07:07.936 20:33:51 -- common/autotest_common.sh@945 -- # kill 41058 00:07:07.936 20:33:51 -- common/autotest_common.sh@950 -- # wait 41058 00:07:10.482 00:07:10.482 real 0m12.088s 00:07:10.482 user 0m12.754s 00:07:10.482 sys 0m2.209s 00:07:10.482 20:33:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.482 20:33:53 -- common/autotest_common.sh@10 -- # set +x 00:07:10.482 ************************************ 00:07:10.482 END TEST non_locking_app_on_locked_coremask 00:07:10.482 ************************************ 00:07:10.482 20:33:53 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:10.482 20:33:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:10.482 20:33:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:10.482 20:33:53 -- common/autotest_common.sh@10 -- # set +x 00:07:10.482 ************************************ 00:07:10.482 START TEST locking_app_on_unlocked_coremask 00:07:10.482 ************************************ 00:07:10.482 20:33:53 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:07:10.482 20:33:53 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=41224 00:07:10.482 20:33:53 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:10.482 20:33:53 -- event/cpu_locks.sh@99 -- # waitforlisten 41224 /var/tmp/spdk.sock 00:07:10.482 20:33:53 -- common/autotest_common.sh@819 -- # '[' -z 41224 ']' 00:07:10.482 20:33:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.482 20:33:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:10.482 20:33:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.482 20:33:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:10.482 20:33:53 -- common/autotest_common.sh@10 -- # set +x 00:07:10.482 [2024-04-15 20:33:53.767092] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:10.482 [2024-04-15 20:33:53.767255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid41224 ] 00:07:10.482 [2024-04-15 20:33:53.916002] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:10.482 [2024-04-15 20:33:53.916094] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.741 [2024-04-15 20:33:54.090781] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:10.741 [2024-04-15 20:33:54.090985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:12.118 20:33:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:12.118 20:33:55 -- common/autotest_common.sh@852 -- # return 0 00:07:12.118 20:33:55 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=41259 00:07:12.118 20:33:55 -- event/cpu_locks.sh@103 -- # waitforlisten 41259 /var/tmp/spdk2.sock 00:07:12.118 20:33:55 -- common/autotest_common.sh@819 -- # '[' -z 41259 ']' 00:07:12.118 20:33:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:12.118 20:33:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:12.118 20:33:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:12.118 20:33:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:12.118 20:33:55 -- common/autotest_common.sh@10 -- # set +x 00:07:12.118 20:33:55 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:12.118 [2024-04-15 20:33:55.398556] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:07:12.118 [2024-04-15 20:33:55.398763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid41259 ] 00:07:12.118 [2024-04-15 20:33:55.555770] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.687 [2024-04-15 20:33:55.903074] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:12.687 [2024-04-15 20:33:55.903267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.144 20:33:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:14.144 20:33:57 -- common/autotest_common.sh@852 -- # return 0 00:07:14.144 20:33:57 -- event/cpu_locks.sh@105 -- # locks_exist 41259 00:07:14.144 20:33:57 -- event/cpu_locks.sh@22 -- # lslocks -p 41259 00:07:14.144 20:33:57 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:16.047 20:33:59 -- event/cpu_locks.sh@107 -- # killprocess 41224 00:07:16.047 20:33:59 -- common/autotest_common.sh@926 -- # '[' -z 41224 ']' 00:07:16.047 20:33:59 -- common/autotest_common.sh@930 -- # kill -0 41224 00:07:16.047 20:33:59 -- common/autotest_common.sh@931 -- # uname 00:07:16.047 20:33:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:16.047 20:33:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 41224 00:07:16.047 killing process with pid 41224 00:07:16.047 20:33:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:16.047 20:33:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:16.047 20:33:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 41224' 00:07:16.047 20:33:59 -- common/autotest_common.sh@945 -- # kill 41224 00:07:16.047 20:33:59 -- common/autotest_common.sh@950 -- # wait 41224 00:07:20.290 20:34:03 -- event/cpu_locks.sh@108 -- # killprocess 41259 00:07:20.290 20:34:03 -- common/autotest_common.sh@926 -- # '[' -z 41259 ']' 00:07:20.290 20:34:03 -- common/autotest_common.sh@930 -- # kill -0 41259 00:07:20.290 20:34:03 -- common/autotest_common.sh@931 -- # uname 00:07:20.290 20:34:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:20.290 20:34:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 41259 00:07:20.290 killing process with pid 41259 00:07:20.291 20:34:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:20.291 20:34:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:20.291 20:34:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 41259' 00:07:20.291 20:34:03 -- common/autotest_common.sh@945 -- # kill 41259 00:07:20.291 20:34:03 -- common/autotest_common.sh@950 -- # wait 41259 00:07:22.243 ************************************ 00:07:22.243 END TEST locking_app_on_unlocked_coremask 00:07:22.243 ************************************ 00:07:22.243 00:07:22.243 real 0m11.920s 00:07:22.243 user 0m12.732s 00:07:22.243 sys 0m2.244s 00:07:22.243 20:34:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.243 20:34:05 -- common/autotest_common.sh@10 -- # set +x 00:07:22.243 20:34:05 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:22.244 20:34:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:22.244 20:34:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:22.244 20:34:05 -- common/autotest_common.sh@10 -- # set 
+x 00:07:22.244 ************************************ 00:07:22.244 START TEST locking_app_on_locked_coremask 00:07:22.244 ************************************ 00:07:22.244 20:34:05 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:07:22.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.244 20:34:05 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=41425 00:07:22.244 20:34:05 -- event/cpu_locks.sh@116 -- # waitforlisten 41425 /var/tmp/spdk.sock 00:07:22.244 20:34:05 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:22.244 20:34:05 -- common/autotest_common.sh@819 -- # '[' -z 41425 ']' 00:07:22.244 20:34:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.244 20:34:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:22.244 20:34:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.244 20:34:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:22.244 20:34:05 -- common/autotest_common.sh@10 -- # set +x 00:07:22.503 [2024-04-15 20:34:05.760403] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:22.503 [2024-04-15 20:34:05.760574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid41425 ] 00:07:22.503 [2024-04-15 20:34:05.910116] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.763 [2024-04-15 20:34:06.080707] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:22.763 [2024-04-15 20:34:06.080899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.699 20:34:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:23.699 20:34:07 -- common/autotest_common.sh@852 -- # return 0 00:07:23.699 20:34:07 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=41448 00:07:23.699 20:34:07 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 41448 /var/tmp/spdk2.sock 00:07:23.699 20:34:07 -- common/autotest_common.sh@640 -- # local es=0 00:07:23.699 20:34:07 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 41448 /var/tmp/spdk2.sock 00:07:23.699 20:34:07 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:07:23.700 20:34:07 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:23.700 20:34:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:23.700 20:34:07 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:07:23.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:23.700 20:34:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:23.700 20:34:07 -- common/autotest_common.sh@643 -- # waitforlisten 41448 /var/tmp/spdk2.sock 00:07:23.700 20:34:07 -- common/autotest_common.sh@819 -- # '[' -z 41448 ']' 00:07:23.700 20:34:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:23.700 20:34:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:23.700 20:34:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
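Here the second target is launched on core 0 without --disable-cpumask-locks while pid 41425 already holds the lock, so waitforlisten is wrapped in NOT: the expected outcome is the claim error seen just below. The scenario in two lines:

    build/bin/spdk_tgt -m 0x1 &                        # pid 41425 claims core 0
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock   # exits: 'Cannot create lock on core 0, probably process 41425 has claimed it'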
00:07:23.700 20:34:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:23.700 20:34:07 -- common/autotest_common.sh@10 -- # set +x 00:07:23.957 [2024-04-15 20:34:07.332335] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:23.957 [2024-04-15 20:34:07.332572] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid41448 ] 00:07:24.215 [2024-04-15 20:34:07.509176] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 41425 has claimed it. 00:07:24.215 [2024-04-15 20:34:07.509248] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:24.473 ERROR: process (pid: 41448) is no longer running 00:07:24.473 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (41448) - No such process 00:07:24.473 20:34:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:24.473 20:34:07 -- common/autotest_common.sh@852 -- # return 1 00:07:24.473 20:34:07 -- common/autotest_common.sh@643 -- # es=1 00:07:24.473 20:34:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:24.473 20:34:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:24.473 20:34:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:24.473 20:34:07 -- event/cpu_locks.sh@122 -- # locks_exist 41425 00:07:24.473 20:34:07 -- event/cpu_locks.sh@22 -- # lslocks -p 41425 00:07:24.473 20:34:07 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:25.434 20:34:08 -- event/cpu_locks.sh@124 -- # killprocess 41425 00:07:25.434 20:34:08 -- common/autotest_common.sh@926 -- # '[' -z 41425 ']' 00:07:25.434 20:34:08 -- common/autotest_common.sh@930 -- # kill -0 41425 00:07:25.434 20:34:08 -- common/autotest_common.sh@931 -- # uname 00:07:25.434 20:34:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:25.434 20:34:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 41425 00:07:25.434 killing process with pid 41425 00:07:25.434 20:34:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:25.434 20:34:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:25.434 20:34:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 41425' 00:07:25.434 20:34:08 -- common/autotest_common.sh@945 -- # kill 41425 00:07:25.434 20:34:08 -- common/autotest_common.sh@950 -- # wait 41425 00:07:27.960 ************************************ 00:07:27.960 END TEST locking_app_on_locked_coremask 00:07:27.960 ************************************ 00:07:27.960 00:07:27.960 real 0m5.255s 00:07:27.960 user 0m5.634s 00:07:27.960 sys 0m1.184s 00:07:27.960 20:34:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.960 20:34:10 -- common/autotest_common.sh@10 -- # set +x 00:07:27.961 20:34:10 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:27.961 20:34:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:27.961 20:34:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:27.961 20:34:10 -- common/autotest_common.sh@10 -- # set +x 00:07:27.961 ************************************ 00:07:27.961 START TEST locking_overlapped_coremask 00:07:27.961 ************************************ 00:07:27.961 20:34:10 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:07:27.961 20:34:10 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=41530 00:07:27.961 20:34:10 -- event/cpu_locks.sh@133 -- # waitforlisten 41530 /var/tmp/spdk.sock 00:07:27.961 20:34:10 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:27.961 20:34:10 -- common/autotest_common.sh@819 -- # '[' -z 41530 ']' 00:07:27.961 20:34:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.961 20:34:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:27.961 20:34:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.961 20:34:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:27.961 20:34:10 -- common/autotest_common.sh@10 -- # set +x 00:07:27.961 [2024-04-15 20:34:11.081087] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:27.961 [2024-04-15 20:34:11.081248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid41530 ] 00:07:27.961 [2024-04-15 20:34:11.239903] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:27.961 [2024-04-15 20:34:11.417057] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:27.961 [2024-04-15 20:34:11.417322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.961 [2024-04-15 20:34:11.417522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.961 [2024-04-15 20:34:11.417527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.333 20:34:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:29.333 20:34:12 -- common/autotest_common.sh@852 -- # return 0 00:07:29.333 20:34:12 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=41555 00:07:29.333 20:34:12 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 41555 /var/tmp/spdk2.sock 00:07:29.333 20:34:12 -- common/autotest_common.sh@640 -- # local es=0 00:07:29.333 20:34:12 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 41555 /var/tmp/spdk2.sock 00:07:29.333 20:34:12 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:29.333 20:34:12 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:07:29.333 20:34:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:29.333 20:34:12 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:07:29.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:29.333 20:34:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:29.333 20:34:12 -- common/autotest_common.sh@643 -- # waitforlisten 41555 /var/tmp/spdk2.sock 00:07:29.333 20:34:12 -- common/autotest_common.sh@819 -- # '[' -z 41555 ']' 00:07:29.333 20:34:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:29.333 20:34:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:29.333 20:34:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:07:29.333 20:34:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:29.333 20:34:12 -- common/autotest_common.sh@10 -- # set +x 00:07:29.333 [2024-04-15 20:34:12.627500] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:29.333 [2024-04-15 20:34:12.627684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid41555 ] 00:07:29.593 [2024-04-15 20:34:12.846518] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 41530 has claimed it. 00:07:29.593 [2024-04-15 20:34:12.846607] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:29.851 ERROR: process (pid: 41555) is no longer running 00:07:29.851 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (41555) - No such process 00:07:29.851 20:34:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:29.851 20:34:13 -- common/autotest_common.sh@852 -- # return 1 00:07:29.851 20:34:13 -- common/autotest_common.sh@643 -- # es=1 00:07:29.851 20:34:13 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:29.851 20:34:13 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:29.851 20:34:13 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:29.851 20:34:13 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:29.851 20:34:13 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:29.851 20:34:13 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:29.851 20:34:13 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:29.851 20:34:13 -- event/cpu_locks.sh@141 -- # killprocess 41530 00:07:29.851 20:34:13 -- common/autotest_common.sh@926 -- # '[' -z 41530 ']' 00:07:29.851 20:34:13 -- common/autotest_common.sh@930 -- # kill -0 41530 00:07:29.851 20:34:13 -- common/autotest_common.sh@931 -- # uname 00:07:29.851 20:34:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:29.851 20:34:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 41530 00:07:29.851 killing process with pid 41530 00:07:29.851 20:34:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:29.851 20:34:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:29.851 20:34:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 41530' 00:07:29.851 20:34:13 -- common/autotest_common.sh@945 -- # kill 41530 00:07:29.851 20:34:13 -- common/autotest_common.sh@950 -- # wait 41530 00:07:32.393 00:07:32.393 real 0m4.453s 00:07:32.393 user 0m11.695s 00:07:32.393 sys 0m0.560s 00:07:32.393 20:34:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.393 ************************************ 00:07:32.393 END TEST locking_overlapped_coremask 00:07:32.393 ************************************ 00:07:32.393 20:34:15 -- common/autotest_common.sh@10 -- # set +x 00:07:32.393 20:34:15 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:32.393 20:34:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:32.393 20:34:15 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:07:32.393 20:34:15 -- common/autotest_common.sh@10 -- # set +x 00:07:32.393 ************************************ 00:07:32.393 START TEST locking_overlapped_coremask_via_rpc 00:07:32.393 ************************************ 00:07:32.393 20:34:15 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:07:32.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.393 20:34:15 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=41627 00:07:32.393 20:34:15 -- event/cpu_locks.sh@149 -- # waitforlisten 41627 /var/tmp/spdk.sock 00:07:32.393 20:34:15 -- common/autotest_common.sh@819 -- # '[' -z 41627 ']' 00:07:32.393 20:34:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.393 20:34:15 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:32.393 20:34:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:32.393 20:34:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.393 20:34:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:32.393 20:34:15 -- common/autotest_common.sh@10 -- # set +x 00:07:32.393 [2024-04-15 20:34:15.604161] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:32.393 [2024-04-15 20:34:15.604333] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid41627 ] 00:07:32.393 [2024-04-15 20:34:15.766816] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:32.393 [2024-04-15 20:34:15.766918] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:32.651 [2024-04-15 20:34:15.939427] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:32.651 [2024-04-15 20:34:15.940027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.651 [2024-04-15 20:34:15.940029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.651 [2024-04-15 20:34:15.940138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.586 20:34:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:33.586 20:34:17 -- common/autotest_common.sh@852 -- # return 0 00:07:33.586 20:34:17 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=41659 00:07:33.586 20:34:17 -- event/cpu_locks.sh@153 -- # waitforlisten 41659 /var/tmp/spdk2.sock 00:07:33.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:33.586 20:34:17 -- common/autotest_common.sh@819 -- # '[' -z 41659 ']' 00:07:33.586 20:34:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:33.586 20:34:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:33.586 20:34:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:07:33.586 20:34:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:33.586 20:34:17 -- common/autotest_common.sh@10 -- # set +x 00:07:33.586 20:34:17 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:33.844 [2024-04-15 20:34:17.181141] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:33.844 [2024-04-15 20:34:17.181305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid41659 ] 00:07:34.102 [2024-04-15 20:34:17.370054] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:34.102 [2024-04-15 20:34:17.370126] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:34.360 [2024-04-15 20:34:17.715813] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:34.360 [2024-04-15 20:34:17.725749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.360 [2024-04-15 20:34:17.737731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:34.360 [2024-04-15 20:34:17.738661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.260 20:34:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:36.260 20:34:19 -- common/autotest_common.sh@852 -- # return 0 00:07:36.260 20:34:19 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:36.260 20:34:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:36.260 20:34:19 -- common/autotest_common.sh@10 -- # set +x 00:07:36.260 20:34:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:36.260 20:34:19 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:36.260 20:34:19 -- common/autotest_common.sh@640 -- # local es=0 00:07:36.260 20:34:19 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:36.260 20:34:19 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:07:36.260 20:34:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:36.260 20:34:19 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:07:36.260 20:34:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:36.260 20:34:19 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:36.260 20:34:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:36.260 20:34:19 -- common/autotest_common.sh@10 -- # set +x 00:07:36.260 [2024-04-15 20:34:19.301876] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 41627 has claimed it. 00:07:36.260 request: 00:07:36.260 { 00:07:36.260 "method": "framework_enable_cpumask_locks", 00:07:36.260 "req_id": 1 00:07:36.260 } 00:07:36.260 Got JSON-RPC error response 00:07:36.260 response: 00:07:36.260 { 00:07:36.260 "code": -32603, 00:07:36.260 "message": "Failed to claim CPU core: 2" 00:07:36.260 } 00:07:36.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
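The -32603 response above is the runtime counterpart of the startup-time lock failure: with --disable-cpumask-locks, locks can still be requested after boot via RPC, and the claim fails the same way when another process holds one of the cores. Issued directly with the stock client — a sketch, assuming the socket path used in this run:

    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # -> JSON-RPC error -32603 'Failed to claim CPU core: 2' while pid 41627 holds core 2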
00:07:36.260 20:34:19 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:07:36.260 20:34:19 -- common/autotest_common.sh@643 -- # es=1 00:07:36.261 20:34:19 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:36.261 20:34:19 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:36.261 20:34:19 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:36.261 20:34:19 -- event/cpu_locks.sh@158 -- # waitforlisten 41627 /var/tmp/spdk.sock 00:07:36.261 20:34:19 -- common/autotest_common.sh@819 -- # '[' -z 41627 ']' 00:07:36.261 20:34:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.261 20:34:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:36.261 20:34:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.261 20:34:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:36.261 20:34:19 -- common/autotest_common.sh@10 -- # set +x 00:07:36.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:36.261 20:34:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:36.261 20:34:19 -- common/autotest_common.sh@852 -- # return 0 00:07:36.261 20:34:19 -- event/cpu_locks.sh@159 -- # waitforlisten 41659 /var/tmp/spdk2.sock 00:07:36.261 20:34:19 -- common/autotest_common.sh@819 -- # '[' -z 41659 ']' 00:07:36.261 20:34:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:36.261 20:34:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:36.261 20:34:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:36.261 20:34:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:36.261 20:34:19 -- common/autotest_common.sh@10 -- # set +x 00:07:36.261 ************************************ 00:07:36.261 END TEST locking_overlapped_coremask_via_rpc 00:07:36.261 ************************************ 00:07:36.261 20:34:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:36.261 20:34:19 -- common/autotest_common.sh@852 -- # return 0 00:07:36.261 20:34:19 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:36.261 20:34:19 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:36.261 20:34:19 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:36.261 20:34:19 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:36.261 00:07:36.261 real 0m4.231s 00:07:36.261 user 0m1.479s 00:07:36.261 sys 0m0.208s 00:07:36.261 20:34:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.261 20:34:19 -- common/autotest_common.sh@10 -- # set +x 00:07:36.261 20:34:19 -- event/cpu_locks.sh@174 -- # cleanup 00:07:36.261 20:34:19 -- event/cpu_locks.sh@15 -- # [[ -z 41627 ]] 00:07:36.261 20:34:19 -- event/cpu_locks.sh@15 -- # killprocess 41627 00:07:36.261 20:34:19 -- common/autotest_common.sh@926 -- # '[' -z 41627 ']' 00:07:36.261 20:34:19 -- common/autotest_common.sh@930 -- # kill -0 41627 00:07:36.261 20:34:19 -- common/autotest_common.sh@931 -- # uname 00:07:36.261 20:34:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:36.261 20:34:19 -- common/autotest_common.sh@932 -- # ps 
--no-headers -o comm= 41627 00:07:36.519 20:34:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:36.519 20:34:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:36.519 killing process with pid 41627 00:07:36.519 20:34:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 41627' 00:07:36.519 20:34:19 -- common/autotest_common.sh@945 -- # kill 41627 00:07:36.519 20:34:19 -- common/autotest_common.sh@950 -- # wait 41627 00:07:39.053 20:34:21 -- event/cpu_locks.sh@16 -- # [[ -z 41659 ]] 00:07:39.053 20:34:21 -- event/cpu_locks.sh@16 -- # killprocess 41659 00:07:39.053 20:34:21 -- common/autotest_common.sh@926 -- # '[' -z 41659 ']' 00:07:39.053 20:34:21 -- common/autotest_common.sh@930 -- # kill -0 41659 00:07:39.053 20:34:21 -- common/autotest_common.sh@931 -- # uname 00:07:39.053 20:34:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:39.053 20:34:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 41659 00:07:39.053 killing process with pid 41659 00:07:39.053 20:34:22 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:07:39.053 20:34:22 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:07:39.053 20:34:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 41659' 00:07:39.053 20:34:22 -- common/autotest_common.sh@945 -- # kill 41659 00:07:39.053 20:34:22 -- common/autotest_common.sh@950 -- # wait 41659 00:07:40.956 20:34:24 -- event/cpu_locks.sh@18 -- # rm -f 00:07:40.957 Process with pid 41627 is not found 00:07:40.957 Process with pid 41659 is not found 00:07:40.957 20:34:24 -- event/cpu_locks.sh@1 -- # cleanup 00:07:40.957 20:34:24 -- event/cpu_locks.sh@15 -- # [[ -z 41627 ]] 00:07:40.957 20:34:24 -- event/cpu_locks.sh@15 -- # killprocess 41627 00:07:40.957 20:34:24 -- common/autotest_common.sh@926 -- # '[' -z 41627 ']' 00:07:40.957 20:34:24 -- common/autotest_common.sh@930 -- # kill -0 41627 00:07:40.957 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (41627) - No such process 00:07:40.957 20:34:24 -- common/autotest_common.sh@953 -- # echo 'Process with pid 41627 is not found' 00:07:40.957 20:34:24 -- event/cpu_locks.sh@16 -- # [[ -z 41659 ]] 00:07:40.957 20:34:24 -- event/cpu_locks.sh@16 -- # killprocess 41659 00:07:40.957 20:34:24 -- common/autotest_common.sh@926 -- # '[' -z 41659 ']' 00:07:40.957 20:34:24 -- common/autotest_common.sh@930 -- # kill -0 41659 00:07:40.957 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (41659) - No such process 00:07:40.957 20:34:24 -- common/autotest_common.sh@953 -- # echo 'Process with pid 41659 is not found' 00:07:40.957 20:34:24 -- event/cpu_locks.sh@18 -- # rm -f 00:07:40.957 00:07:40.957 real 0m52.253s 00:07:40.957 user 1m25.885s 00:07:40.957 sys 0m9.651s 00:07:40.957 20:34:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.957 20:34:24 -- common/autotest_common.sh@10 -- # set +x 00:07:40.957 ************************************ 00:07:40.957 END TEST cpu_locks 00:07:40.957 ************************************ 00:07:40.957 ************************************ 00:07:40.957 END TEST event 00:07:40.957 ************************************ 00:07:40.957 00:07:40.957 real 1m4.507s 00:07:40.957 user 1m46.100s 00:07:40.957 sys 0m10.683s 00:07:40.957 20:34:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.957 20:34:24 -- common/autotest_common.sh@10 -- # set +x 00:07:40.957 20:34:24 -- spdk/autotest.sh@188 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:40.957 20:34:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:40.957 20:34:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:40.957 20:34:24 -- common/autotest_common.sh@10 -- # set +x 00:07:40.957 ************************************ 00:07:40.957 START TEST thread 00:07:40.957 ************************************ 00:07:40.957 20:34:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:40.957 * Looking for test storage... 00:07:40.957 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:40.957 20:34:24 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:40.957 20:34:24 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:40.957 20:34:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:40.957 20:34:24 -- common/autotest_common.sh@10 -- # set +x 00:07:40.957 ************************************ 00:07:40.957 START TEST thread_poller_perf 00:07:40.957 ************************************ 00:07:40.957 20:34:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:40.957 [2024-04-15 20:34:24.448954] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:40.957 [2024-04-15 20:34:24.449107] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid41857 ] 00:07:41.216 [2024-04-15 20:34:24.624237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.475 [2024-04-15 20:34:24.817144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.475 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:42.927 ====================================== 00:07:42.927 busy:2496254834 (cyc) 00:07:42.927 total_run_count: 1518000 00:07:42.927 tsc_hz: 2490000000 (cyc) 00:07:42.927 ====================================== 00:07:42.927 poller_cost: 1644 (cyc), 660 (nsec) 00:07:42.927 ************************************ 00:07:42.927 END TEST thread_poller_perf 00:07:42.927 ************************************ 00:07:42.927 00:07:42.927 real 0m1.771s 00:07:42.927 user 0m1.574s 00:07:42.927 sys 0m0.097s 00:07:42.927 20:34:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.927 20:34:26 -- common/autotest_common.sh@10 -- # set +x 00:07:42.927 20:34:26 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:42.927 20:34:26 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:42.927 20:34:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:42.927 20:34:26 -- common/autotest_common.sh@10 -- # set +x 00:07:42.927 ************************************ 00:07:42.927 START TEST thread_poller_perf 00:07:42.927 ************************************ 00:07:42.927 20:34:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:42.927 [2024-04-15 20:34:26.281896] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
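The first poller_perf summary above is straight arithmetic: poller_cost is busy TSC cycles divided by total_run_count, converted to nanoseconds through tsc_hz. A quick sketch reproducing the printed numbers from the reported values (integer division matches the tool's output):

    # 2496254834 busy cycles over 1518000 polls -> 1644 cycles per poll
    echo $((2496254834 / 1518000))             # 1644
    # at tsc_hz = 2490000000 cyc/s, 1644 cycles -> 660 ns per poll
    echo $((1644 * 1000000000 / 2490000000))   # 660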
00:07:42.927 [2024-04-15 20:34:26.282079] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid41910 ] 00:07:43.186 [2024-04-15 20:34:26.463478] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.186 [2024-04-15 20:34:26.659230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.186 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:44.565 ====================================== 00:07:44.565 busy:2495235692 (cyc) 00:07:44.565 total_run_count: 16256000 00:07:44.565 tsc_hz: 2490000000 (cyc) 00:07:44.565 ====================================== 00:07:44.565 poller_cost: 153 (cyc), 61 (nsec) 00:07:44.565 00:07:44.565 real 0m1.774s 00:07:44.565 user 0m1.570s 00:07:44.565 sys 0m0.102s 00:07:44.565 20:34:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.565 20:34:28 -- common/autotest_common.sh@10 -- # set +x 00:07:44.565 ************************************ 00:07:44.565 END TEST thread_poller_perf 00:07:44.565 ************************************ 00:07:44.824 20:34:28 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:07:44.824 20:34:28 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:07:44.824 20:34:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:44.824 20:34:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:44.824 20:34:28 -- common/autotest_common.sh@10 -- # set +x 00:07:44.824 ************************************ 00:07:44.824 START TEST thread_spdk_lock 00:07:44.824 ************************************ 00:07:44.824 20:34:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:07:44.824 [2024-04-15 20:34:28.123425] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
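The spdk_lock run starting here deliberately exercises the spinlock error paths reported below -- locks still held when a thread goes off CPU, and deadlock detection on a re-lock -- before printing a contention summary. In that "test contend" table, Total us is Wait us plus Hold us per worker, to within a microsecond of rounding; a sketch of the check:

    # Worker 0: 185347 us waited + 177010 us held ~= 362358 us total
    echo $((185347 + 177010))   # 362357
    # Worker 1: 105442 + 277822 ~= 383265
    echo $((105442 + 277822))   # 383264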
00:07:44.824 [2024-04-15 20:34:28.123717] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid41953 ] 00:07:44.824 [2024-04-15 20:34:28.304288] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:45.083 [2024-04-15 20:34:28.516292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.083 [2024-04-15 20:34:28.516296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.651 [2024-04-15 20:34:28.991686] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 955:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:45.651 [2024-04-15 20:34:28.991769] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:07:45.651 [2024-04-15 20:34:28.991802] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0xc31840 00:07:45.651 [2024-04-15 20:34:29.000648] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:45.651 [2024-04-15 20:34:29.000744] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1016:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:45.651 [2024-04-15 20:34:29.000775] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:07:45.910 Starting test contend 00:07:45.910 Worker Delay Wait us Hold us Total us 00:07:45.910 0 3 185347 177010 362358 00:07:45.910 1 5 105442 277822 383265 00:07:45.910 PASS test contend 00:07:45.910 Starting test hold_by_poller 00:07:45.910 PASS test hold_by_poller 00:07:45.910 Starting test hold_by_message 00:07:45.910 PASS test hold_by_message 00:07:45.910 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:07:45.910 100014 assertions passed 00:07:45.910 0 assertions failed 00:07:45.910 ************************************ 00:07:45.910 END TEST thread_spdk_lock 00:07:45.910 ************************************ 00:07:45.910 00:07:45.910 real 0m1.278s 00:07:45.910 user 0m1.556s 00:07:45.910 sys 0m0.107s 00:07:45.910 20:34:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.910 20:34:29 -- common/autotest_common.sh@10 -- # set +x 00:07:46.169 00:07:46.169 real 0m5.142s 00:07:46.169 user 0m4.827s 00:07:46.169 sys 0m0.504s 00:07:46.169 20:34:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.169 20:34:29 -- common/autotest_common.sh@10 -- # set +x 00:07:46.169 ************************************ 00:07:46.169 END TEST thread 00:07:46.169 ************************************ 00:07:46.169 20:34:29 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:46.169 20:34:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:46.169 20:34:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:46.169 20:34:29 -- common/autotest_common.sh@10 -- # set +x 00:07:46.169 ************************************ 00:07:46.169 START TEST accel 00:07:46.169 
************************************ 00:07:46.169 20:34:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:46.169 * Looking for test storage... 00:07:46.169 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:46.169 20:34:29 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:07:46.169 20:34:29 -- accel/accel.sh@74 -- # get_expected_opcs 00:07:46.169 20:34:29 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:46.169 20:34:29 -- accel/accel.sh@59 -- # spdk_tgt_pid=42045 00:07:46.169 20:34:29 -- accel/accel.sh@60 -- # waitforlisten 42045 00:07:46.169 20:34:29 -- common/autotest_common.sh@819 -- # '[' -z 42045 ']' 00:07:46.169 20:34:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.169 20:34:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:46.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.169 20:34:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.169 20:34:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:46.169 20:34:29 -- common/autotest_common.sh@10 -- # set +x 00:07:46.170 20:34:29 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:46.170 20:34:29 -- accel/accel.sh@58 -- # build_accel_config 00:07:46.170 20:34:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:46.170 20:34:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.170 20:34:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.170 20:34:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:46.170 20:34:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:46.170 20:34:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:46.170 20:34:29 -- accel/accel.sh@42 -- # jq -r . 00:07:46.429 [2024-04-15 20:34:29.742006] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:46.429 [2024-04-15 20:34:29.742154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42045 ] 00:07:46.429 [2024-04-15 20:34:29.912311] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.687 [2024-04-15 20:34:30.088158] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:46.687 [2024-04-15 20:34:30.088354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.625 20:34:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:47.625 20:34:31 -- common/autotest_common.sh@852 -- # return 0 00:07:47.625 20:34:31 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:47.625 20:34:31 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:47.625 20:34:31 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:07:47.625 20:34:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:47.625 20:34:31 -- common/autotest_common.sh@10 -- # set +x 00:07:47.625 20:34:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:47.936 20:34:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:47.936 20:34:31 -- accel/accel.sh@64 -- # IFS== 00:07:47.936 20:34:31 -- accel/accel.sh@64 -- # read -r opc module 00:07:47.936 20:34:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:47.936 20:34:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:47.936 20:34:31 -- accel/accel.sh@64 -- # IFS== 00:07:47.936 20:34:31 -- accel/accel.sh@64 -- # read -r opc module 00:07:47.936 20:34:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:47.936 20:34:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:47.936 20:34:31 -- accel/accel.sh@64 -- # IFS== 00:07:47.936 20:34:31 -- accel/accel.sh@64 -- # read -r opc module 00:07:47.936 20:34:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:47.936 20:34:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:47.936 20:34:31 -- accel/accel.sh@64 -- # IFS== 00:07:47.936 20:34:31 -- accel/accel.sh@64 -- # read -r opc module 00:07:47.936 20:34:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:47.936 20:34:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:47.936 20:34:31 -- accel/accel.sh@64 -- # IFS== 00:07:47.936 20:34:31 -- accel/accel.sh@64 -- # read -r opc module 00:07:47.936 20:34:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:47.936 20:34:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:47.936 20:34:31 -- accel/accel.sh@64 -- # IFS== 00:07:47.936 20:34:31 -- accel/accel.sh@64 -- # read -r opc module 00:07:47.936 20:34:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:47.936 20:34:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:47.936 20:34:31 -- accel/accel.sh@64 -- # IFS== 00:07:47.936 20:34:31 -- accel/accel.sh@64 -- # read -r opc module 00:07:47.936 20:34:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:47.936 20:34:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:47.936 20:34:31 -- accel/accel.sh@64 -- # IFS== 00:07:47.936 20:34:31 -- accel/accel.sh@64 -- # read -r opc module 00:07:47.936 20:34:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:47.936 20:34:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:47.936 20:34:31 -- accel/accel.sh@64 -- # IFS== 00:07:47.936 20:34:31 -- accel/accel.sh@64 -- # read -r opc module 00:07:47.936 20:34:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:47.936 20:34:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:47.936 20:34:31 -- accel/accel.sh@64 -- # IFS== 00:07:47.936 20:34:31 -- accel/accel.sh@64 -- # read -r opc module 00:07:47.936 20:34:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:47.936 20:34:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:47.936 20:34:31 -- accel/accel.sh@64 -- # IFS== 00:07:47.936 20:34:31 -- accel/accel.sh@64 -- # read -r opc module 00:07:47.937 20:34:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:47.937 20:34:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:47.937 20:34:31 -- accel/accel.sh@64 -- # 
IFS== 00:07:47.937 20:34:31 -- accel/accel.sh@64 -- # read -r opc module 00:07:47.937 20:34:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:47.937 20:34:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:47.937 20:34:31 -- accel/accel.sh@64 -- # IFS== 00:07:47.937 20:34:31 -- accel/accel.sh@64 -- # read -r opc module 00:07:47.937 20:34:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:47.937 20:34:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:47.937 20:34:31 -- accel/accel.sh@64 -- # IFS== 00:07:47.937 20:34:31 -- accel/accel.sh@64 -- # read -r opc module 00:07:47.937 20:34:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:47.937 20:34:31 -- accel/accel.sh@67 -- # killprocess 42045 00:07:47.937 20:34:31 -- common/autotest_common.sh@926 -- # '[' -z 42045 ']' 00:07:47.937 20:34:31 -- common/autotest_common.sh@930 -- # kill -0 42045 00:07:47.937 20:34:31 -- common/autotest_common.sh@931 -- # uname 00:07:47.937 20:34:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:47.937 20:34:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 42045 00:07:47.937 killing process with pid 42045 00:07:47.937 20:34:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:47.937 20:34:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:47.937 20:34:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 42045' 00:07:47.937 20:34:31 -- common/autotest_common.sh@945 -- # kill 42045 00:07:47.937 20:34:31 -- common/autotest_common.sh@950 -- # wait 42045 00:07:49.841 20:34:33 -- accel/accel.sh@68 -- # trap - ERR 00:07:49.841 20:34:33 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:07:49.841 20:34:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:49.841 20:34:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:49.841 20:34:33 -- common/autotest_common.sh@10 -- # set +x 00:07:49.841 20:34:33 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:07:49.841 20:34:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:49.841 20:34:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:49.841 20:34:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:49.841 20:34:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.841 20:34:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.841 20:34:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:49.841 20:34:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:49.841 20:34:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:49.841 20:34:33 -- accel/accel.sh@42 -- # jq -r . 
00:07:50.100 20:34:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.100 20:34:33 -- common/autotest_common.sh@10 -- # set +x 00:07:50.100 20:34:33 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:50.100 20:34:33 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:50.100 20:34:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:50.100 20:34:33 -- common/autotest_common.sh@10 -- # set +x 00:07:50.100 ************************************ 00:07:50.100 START TEST accel_missing_filename 00:07:50.100 ************************************ 00:07:50.100 20:34:33 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:07:50.100 20:34:33 -- common/autotest_common.sh@640 -- # local es=0 00:07:50.100 20:34:33 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:50.100 20:34:33 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:07:50.100 20:34:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:50.100 20:34:33 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:07:50.100 20:34:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:50.100 20:34:33 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:07:50.100 20:34:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:50.100 20:34:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:50.100 20:34:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:50.100 20:34:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.100 20:34:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.100 20:34:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:50.100 20:34:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:50.100 20:34:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:50.100 20:34:33 -- accel/accel.sh@42 -- # jq -r . 00:07:50.360 [2024-04-15 20:34:33.656135] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:50.360 [2024-04-15 20:34:33.656295] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42139 ] 00:07:50.360 [2024-04-15 20:34:33.832279] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.619 [2024-04-15 20:34:34.030958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.877 [2024-04-15 20:34:34.239123] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:51.445 [2024-04-15 20:34:34.740530] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:51.743 A filename is required. 
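The "A filename is required." failure is the point of this test: compress workloads need an uncompressed input file, passed with -l per the usage text shown later in this log. A correct invocation, sketched from the compress_verify command the suite runs next (the bib test file ships in the repo):

    # compress with a real input file; omit -y, since the next test shows
    # the verify option is rejected for compression.
    ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib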
00:07:51.743 20:34:35 -- common/autotest_common.sh@643 -- # es=234 00:07:51.743 20:34:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:51.743 ************************************ 00:07:51.743 END TEST accel_missing_filename 00:07:51.743 ************************************ 00:07:51.743 20:34:35 -- common/autotest_common.sh@652 -- # es=106 00:07:51.743 20:34:35 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:51.743 20:34:35 -- common/autotest_common.sh@660 -- # es=1 00:07:51.743 20:34:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:51.743 00:07:51.743 real 0m1.590s 00:07:51.743 user 0m1.268s 00:07:51.743 sys 0m0.176s 00:07:51.743 20:34:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.743 20:34:35 -- common/autotest_common.sh@10 -- # set +x 00:07:51.743 20:34:35 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:51.743 20:34:35 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:07:51.743 20:34:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:51.743 20:34:35 -- common/autotest_common.sh@10 -- # set +x 00:07:51.743 ************************************ 00:07:51.743 START TEST accel_compress_verify 00:07:51.743 ************************************ 00:07:51.743 20:34:35 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:51.743 20:34:35 -- common/autotest_common.sh@640 -- # local es=0 00:07:51.743 20:34:35 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:51.743 20:34:35 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:07:51.743 20:34:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:51.743 20:34:35 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:07:51.743 20:34:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:51.743 20:34:35 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:51.743 20:34:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:51.743 20:34:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:51.743 20:34:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:51.743 20:34:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.743 20:34:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.743 20:34:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:51.743 20:34:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:51.743 20:34:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:51.743 20:34:35 -- accel/accel.sh@42 -- # jq -r . 00:07:52.014 [2024-04-15 20:34:35.293282] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:07:52.014 [2024-04-15 20:34:35.293436] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42187 ] 00:07:52.014 [2024-04-15 20:34:35.444596] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.273 [2024-04-15 20:34:35.649044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.532 [2024-04-15 20:34:35.838588] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:53.100 [2024-04-15 20:34:36.348835] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:53.360 00:07:53.360 Compression does not support the verify option, aborting. 00:07:53.360 20:34:36 -- common/autotest_common.sh@643 -- # es=161 00:07:53.360 20:34:36 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:53.360 20:34:36 -- common/autotest_common.sh@652 -- # es=33 00:07:53.360 20:34:36 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:53.360 20:34:36 -- common/autotest_common.sh@660 -- # es=1 00:07:53.360 20:34:36 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:53.360 00:07:53.360 real 0m1.565s 00:07:53.360 user 0m1.261s 00:07:53.360 sys 0m0.158s 00:07:53.360 ************************************ 00:07:53.360 END TEST accel_compress_verify 00:07:53.360 ************************************ 00:07:53.360 20:34:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.360 20:34:36 -- common/autotest_common.sh@10 -- # set +x 00:07:53.360 20:34:36 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:53.360 20:34:36 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:53.360 20:34:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:53.360 20:34:36 -- common/autotest_common.sh@10 -- # set +x 00:07:53.360 ************************************ 00:07:53.360 START TEST accel_wrong_workload 00:07:53.360 ************************************ 00:07:53.360 20:34:36 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:07:53.360 20:34:36 -- common/autotest_common.sh@640 -- # local es=0 00:07:53.360 20:34:36 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:53.360 20:34:36 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:07:53.360 20:34:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:53.360 20:34:36 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:07:53.360 20:34:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:53.360 20:34:36 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:07:53.360 20:34:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:53.360 20:34:36 -- accel/accel.sh@12 -- # build_accel_config 00:07:53.360 20:34:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:53.360 20:34:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.360 20:34:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.360 20:34:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:53.360 20:34:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:53.360 20:34:36 -- accel/accel.sh@41 -- # local IFS=, 00:07:53.360 20:34:36 -- accel/accel.sh@42 -- # jq -r . 
00:07:53.620 Unsupported workload type: foobar 00:07:53.620 [2024-04-15 20:34:36.911564] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:53.620 accel_perf options: 00:07:53.620 [-h help message] 00:07:53.620 [-q queue depth per core] 00:07:53.620 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:53.620 [-T number of threads per core 00:07:53.620 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:53.620 [-t time in seconds] 00:07:53.620 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:53.620 [ dif_verify, , dif_generate, dif_generate_copy 00:07:53.620 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:53.620 [-l for compress/decompress workloads, name of uncompressed input file 00:07:53.620 [-S for crc32c workload, use this seed value (default 0) 00:07:53.620 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:53.620 [-f for fill workload, use this BYTE value (default 255) 00:07:53.620 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:53.620 [-y verify result if this switch is on] 00:07:53.620 [-a tasks to allocate per core (default: same value as -q)] 00:07:53.620 Can be used to spread operations across a wider range of memory. 00:07:53.620 20:34:36 -- common/autotest_common.sh@643 -- # es=1 00:07:53.620 20:34:36 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:53.620 20:34:36 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:53.620 20:34:36 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:53.620 00:07:53.620 real 0m0.159s 00:07:53.620 user 0m0.076s 00:07:53.620 sys 0m0.042s 00:07:53.620 20:34:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.620 20:34:36 -- common/autotest_common.sh@10 -- # set +x 00:07:53.620 ************************************ 00:07:53.620 END TEST accel_wrong_workload 00:07:53.620 ************************************ 00:07:53.620 20:34:36 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:53.620 20:34:36 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:07:53.620 20:34:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:53.620 20:34:36 -- common/autotest_common.sh@10 -- # set +x 00:07:53.620 ************************************ 00:07:53.620 START TEST accel_negative_buffers 00:07:53.620 ************************************ 00:07:53.620 20:34:36 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:53.620 20:34:36 -- common/autotest_common.sh@640 -- # local es=0 00:07:53.620 20:34:36 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:53.620 20:34:36 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:07:53.620 20:34:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:53.620 20:34:36 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:07:53.620 20:34:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:53.620 20:34:36 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:07:53.620 20:34:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:53.620 20:34:36 -- accel/accel.sh@12 -- # 
build_accel_config 00:07:53.620 20:34:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:53.620 20:34:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.620 20:34:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.620 20:34:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:53.620 20:34:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:53.620 20:34:36 -- accel/accel.sh@41 -- # local IFS=, 00:07:53.620 20:34:36 -- accel/accel.sh@42 -- # jq -r . 00:07:53.880 -x option must be non-negative. 00:07:53.880 [2024-04-15 20:34:37.128483] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:53.880 accel_perf options: 00:07:53.880 [-h help message] 00:07:53.880 [-q queue depth per core] 00:07:53.880 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:53.880 [-T number of threads per core 00:07:53.880 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:53.880 [-t time in seconds] 00:07:53.880 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:53.880 [ dif_verify, , dif_generate, dif_generate_copy 00:07:53.880 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:53.880 [-l for compress/decompress workloads, name of uncompressed input file 00:07:53.880 [-S for crc32c workload, use this seed value (default 0) 00:07:53.880 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:53.880 [-f for fill workload, use this BYTE value (default 255) 00:07:53.880 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:53.880 [-y verify result if this switch is on] 00:07:53.880 [-a tasks to allocate per core (default: same value as -q)] 00:07:53.880 Can be used to spread operations across a wider range of memory. 
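Per the usage text just printed, -x sets the number of xor source buffers and must be at least 2, so this test's -x -1 is rejected during argument parsing before the app even starts. A valid counterpart to the failing command, assuming the same binary (a sketch only):

    # xor across the minimum two source buffers; -y verifies the result
    ./build/examples/accel_perf -t 1 -w xor -y -x 2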
00:07:53.880 20:34:37 -- common/autotest_common.sh@643 -- # es=1 00:07:53.880 20:34:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:53.880 20:34:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:53.880 20:34:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:53.880 00:07:53.880 real 0m0.165s 00:07:53.880 user 0m0.087s 00:07:53.880 sys 0m0.040s 00:07:53.880 20:34:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.880 ************************************ 00:07:53.880 END TEST accel_negative_buffers 00:07:53.880 ************************************ 00:07:53.880 20:34:37 -- common/autotest_common.sh@10 -- # set +x 00:07:53.880 20:34:37 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:53.880 20:34:37 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:53.880 20:34:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:53.880 20:34:37 -- common/autotest_common.sh@10 -- # set +x 00:07:53.880 ************************************ 00:07:53.880 START TEST accel_crc32c 00:07:53.880 ************************************ 00:07:53.880 20:34:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:53.880 20:34:37 -- accel/accel.sh@16 -- # local accel_opc 00:07:53.880 20:34:37 -- accel/accel.sh@17 -- # local accel_module 00:07:53.880 20:34:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:53.880 20:34:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:53.880 20:34:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:53.880 20:34:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:53.880 20:34:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.880 20:34:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.880 20:34:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:53.880 20:34:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:53.880 20:34:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:53.880 20:34:37 -- accel/accel.sh@42 -- # jq -r . 00:07:53.880 [2024-04-15 20:34:37.355471] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:53.880 [2024-04-15 20:34:37.355633] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42290 ] 00:07:54.139 [2024-04-15 20:34:37.507399] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.399 [2024-04-15 20:34:37.711915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.936 20:34:39 -- accel/accel.sh@18 -- # out=' 00:07:56.936 SPDK Configuration: 00:07:56.936 Core mask: 0x1 00:07:56.936 00:07:56.936 Accel Perf Configuration: 00:07:56.936 Workload Type: crc32c 00:07:56.936 CRC-32C seed: 32 00:07:56.936 Transfer size: 4096 bytes 00:07:56.936 Vector count 1 00:07:56.936 Module: software 00:07:56.936 Queue depth: 32 00:07:56.936 Allocate depth: 32 00:07:56.936 # threads/core: 1 00:07:56.936 Run time: 1 seconds 00:07:56.936 Verify: Yes 00:07:56.936 00:07:56.936 Running for 1 seconds... 
00:07:56.936 00:07:56.936 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:56.936 ------------------------------------------------------------------------------------ 00:07:56.936 0,0 126592/s 494 MiB/s 0 0 00:07:56.936 ==================================================================================== 00:07:56.936 Total 126592/s 494 MiB/s 0 0' 00:07:56.936 20:34:39 -- accel/accel.sh@20 -- # IFS=: 00:07:56.936 20:34:39 -- accel/accel.sh@20 -- # read -r var val 00:07:56.936 20:34:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:56.936 20:34:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:56.936 20:34:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:56.936 20:34:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:56.936 20:34:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:56.936 20:34:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:56.936 20:34:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:56.936 20:34:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:56.936 20:34:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:56.936 20:34:39 -- accel/accel.sh@42 -- # jq -r . 00:07:56.936 [2024-04-15 20:34:39.960176] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:56.936 [2024-04-15 20:34:39.960335] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42324 ] 00:07:56.936 [2024-04-15 20:34:40.131490] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.936 [2024-04-15 20:34:40.334917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.195 20:34:40 -- accel/accel.sh@21 -- # val= 00:07:57.195 20:34:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.195 20:34:40 -- accel/accel.sh@20 -- # IFS=: 00:07:57.195 20:34:40 -- accel/accel.sh@20 -- # read -r var val 00:07:57.195 20:34:40 -- accel/accel.sh@21 -- # val= 00:07:57.195 20:34:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.195 20:34:40 -- accel/accel.sh@20 -- # IFS=: 00:07:57.195 20:34:40 -- accel/accel.sh@20 -- # read -r var val 00:07:57.195 20:34:40 -- accel/accel.sh@21 -- # val=0x1 00:07:57.195 20:34:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.195 20:34:40 -- accel/accel.sh@20 -- # IFS=: 00:07:57.195 20:34:40 -- accel/accel.sh@20 -- # read -r var val 00:07:57.195 20:34:40 -- accel/accel.sh@21 -- # val= 00:07:57.195 20:34:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.195 20:34:40 -- accel/accel.sh@20 -- # IFS=: 00:07:57.195 20:34:40 -- accel/accel.sh@20 -- # read -r var val 00:07:57.195 20:34:40 -- accel/accel.sh@21 -- # val= 00:07:57.195 20:34:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.195 20:34:40 -- accel/accel.sh@20 -- # IFS=: 00:07:57.195 20:34:40 -- accel/accel.sh@20 -- # read -r var val 00:07:57.195 20:34:40 -- accel/accel.sh@21 -- # val=crc32c 00:07:57.195 20:34:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.195 20:34:40 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:57.195 20:34:40 -- accel/accel.sh@20 -- # IFS=: 00:07:57.195 20:34:40 -- accel/accel.sh@20 -- # read -r var val 00:07:57.195 20:34:40 -- accel/accel.sh@21 -- # val=32 00:07:57.195 20:34:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.195 20:34:40 -- accel/accel.sh@20 -- # IFS=: 00:07:57.195 20:34:40 -- accel/accel.sh@20 -- # read -r var val 00:07:57.195 20:34:40 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:07:57.195 20:34:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.195 20:34:40 -- accel/accel.sh@20 -- # IFS=: 00:07:57.195 20:34:40 -- accel/accel.sh@20 -- # read -r var val 00:07:57.195 20:34:40 -- accel/accel.sh@21 -- # val= 00:07:57.195 20:34:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.195 20:34:40 -- accel/accel.sh@20 -- # IFS=: 00:07:57.195 20:34:40 -- accel/accel.sh@20 -- # read -r var val 00:07:57.195 20:34:40 -- accel/accel.sh@21 -- # val=software 00:07:57.195 20:34:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.195 20:34:40 -- accel/accel.sh@23 -- # accel_module=software 00:07:57.195 20:34:40 -- accel/accel.sh@20 -- # IFS=: 00:07:57.195 20:34:40 -- accel/accel.sh@20 -- # read -r var val 00:07:57.195 20:34:40 -- accel/accel.sh@21 -- # val=32 00:07:57.195 20:34:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.195 20:34:40 -- accel/accel.sh@20 -- # IFS=: 00:07:57.195 20:34:40 -- accel/accel.sh@20 -- # read -r var val 00:07:57.195 20:34:40 -- accel/accel.sh@21 -- # val=32 00:07:57.195 20:34:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.195 20:34:40 -- accel/accel.sh@20 -- # IFS=: 00:07:57.195 20:34:40 -- accel/accel.sh@20 -- # read -r var val 00:07:57.195 20:34:40 -- accel/accel.sh@21 -- # val=1 00:07:57.195 20:34:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.195 20:34:40 -- accel/accel.sh@20 -- # IFS=: 00:07:57.195 20:34:40 -- accel/accel.sh@20 -- # read -r var val 00:07:57.195 20:34:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:57.195 20:34:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.195 20:34:40 -- accel/accel.sh@20 -- # IFS=: 00:07:57.195 20:34:40 -- accel/accel.sh@20 -- # read -r var val 00:07:57.195 20:34:40 -- accel/accel.sh@21 -- # val=Yes 00:07:57.195 20:34:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.195 20:34:40 -- accel/accel.sh@20 -- # IFS=: 00:07:57.195 20:34:40 -- accel/accel.sh@20 -- # read -r var val 00:07:57.195 20:34:40 -- accel/accel.sh@21 -- # val= 00:07:57.195 20:34:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.195 20:34:40 -- accel/accel.sh@20 -- # IFS=: 00:07:57.195 20:34:40 -- accel/accel.sh@20 -- # read -r var val 00:07:57.195 20:34:40 -- accel/accel.sh@21 -- # val= 00:07:57.195 20:34:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.195 20:34:40 -- accel/accel.sh@20 -- # IFS=: 00:07:57.195 20:34:40 -- accel/accel.sh@20 -- # read -r var val 00:07:59.103 20:34:42 -- accel/accel.sh@21 -- # val= 00:07:59.103 20:34:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.103 20:34:42 -- accel/accel.sh@20 -- # IFS=: 00:07:59.103 20:34:42 -- accel/accel.sh@20 -- # read -r var val 00:07:59.103 20:34:42 -- accel/accel.sh@21 -- # val= 00:07:59.103 20:34:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.103 20:34:42 -- accel/accel.sh@20 -- # IFS=: 00:07:59.103 20:34:42 -- accel/accel.sh@20 -- # read -r var val 00:07:59.103 20:34:42 -- accel/accel.sh@21 -- # val= 00:07:59.103 20:34:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.103 20:34:42 -- accel/accel.sh@20 -- # IFS=: 00:07:59.103 20:34:42 -- accel/accel.sh@20 -- # read -r var val 00:07:59.103 20:34:42 -- accel/accel.sh@21 -- # val= 00:07:59.103 20:34:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.103 20:34:42 -- accel/accel.sh@20 -- # IFS=: 00:07:59.103 20:34:42 -- accel/accel.sh@20 -- # read -r var val 00:07:59.103 20:34:42 -- accel/accel.sh@21 -- # val= 00:07:59.103 20:34:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.103 20:34:42 -- accel/accel.sh@20 -- # IFS=: 00:07:59.103 20:34:42 -- 
accel/accel.sh@20 -- # read -r var val 00:07:59.103 20:34:42 -- accel/accel.sh@21 -- # val= 00:07:59.103 20:34:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.103 20:34:42 -- accel/accel.sh@20 -- # IFS=: 00:07:59.103 20:34:42 -- accel/accel.sh@20 -- # read -r var val 00:07:59.103 20:34:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:59.103 20:34:42 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:59.103 20:34:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:59.103 ************************************ 00:07:59.103 END TEST accel_crc32c 00:07:59.103 ************************************ 00:07:59.103 00:07:59.103 real 0m5.226s 00:07:59.103 user 0m4.578s 00:07:59.103 sys 0m0.353s 00:07:59.103 20:34:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.103 20:34:42 -- common/autotest_common.sh@10 -- # set +x 00:07:59.103 20:34:42 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:59.103 20:34:42 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:59.103 20:34:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:59.103 20:34:42 -- common/autotest_common.sh@10 -- # set +x 00:07:59.103 ************************************ 00:07:59.103 START TEST accel_crc32c_C2 00:07:59.103 ************************************ 00:07:59.103 20:34:42 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:59.103 20:34:42 -- accel/accel.sh@16 -- # local accel_opc 00:07:59.103 20:34:42 -- accel/accel.sh@17 -- # local accel_module 00:07:59.103 20:34:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:59.103 20:34:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:59.103 20:34:42 -- accel/accel.sh@12 -- # build_accel_config 00:07:59.103 20:34:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:59.103 20:34:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:59.103 20:34:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:59.103 20:34:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:59.103 20:34:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:59.103 20:34:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:59.103 20:34:42 -- accel/accel.sh@42 -- # jq -r . 00:07:59.366 [2024-04-15 20:34:42.643638] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:59.366 [2024-04-15 20:34:42.643842] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42383 ] 00:07:59.366 [2024-04-15 20:34:42.793850] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.626 [2024-04-15 20:34:42.999434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.160 20:34:45 -- accel/accel.sh@18 -- # out=' 00:08:02.160 SPDK Configuration: 00:08:02.160 Core mask: 0x1 00:08:02.160 00:08:02.160 Accel Perf Configuration: 00:08:02.160 Workload Type: crc32c 00:08:02.160 CRC-32C seed: 0 00:08:02.160 Transfer size: 4096 bytes 00:08:02.160 Vector count 2 00:08:02.160 Module: software 00:08:02.160 Queue depth: 32 00:08:02.160 Allocate depth: 32 00:08:02.160 # threads/core: 1 00:08:02.160 Run time: 1 seconds 00:08:02.160 Verify: Yes 00:08:02.160 00:08:02.160 Running for 1 seconds... 
00:08:02.160 00:08:02.160 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:02.160 ------------------------------------------------------------------------------------ 00:08:02.160 0,0 64416/s 503 MiB/s 0 0 00:08:02.160 ==================================================================================== 00:08:02.160 Total 64416/s 251 MiB/s 0 0' 00:08:02.160 20:34:45 -- accel/accel.sh@20 -- # IFS=: 00:08:02.160 20:34:45 -- accel/accel.sh@20 -- # read -r var val 00:08:02.160 20:34:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:08:02.160 20:34:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:08:02.160 20:34:45 -- accel/accel.sh@12 -- # build_accel_config 00:08:02.160 20:34:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:02.160 20:34:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:02.160 20:34:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:02.160 20:34:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:02.160 20:34:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:02.160 20:34:45 -- accel/accel.sh@41 -- # local IFS=, 00:08:02.160 20:34:45 -- accel/accel.sh@42 -- # jq -r . 00:08:02.160 [2024-04-15 20:34:45.241051] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:08:02.160 [2024-04-15 20:34:45.241216] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42417 ] 00:08:02.160 [2024-04-15 20:34:45.413390] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.160 [2024-04-15 20:34:45.620046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.418 20:34:45 -- accel/accel.sh@21 -- # val= 00:08:02.418 20:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.418 20:34:45 -- accel/accel.sh@20 -- # IFS=: 00:08:02.418 20:34:45 -- accel/accel.sh@20 -- # read -r var val 00:08:02.418 20:34:45 -- accel/accel.sh@21 -- # val= 00:08:02.418 20:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.418 20:34:45 -- accel/accel.sh@20 -- # IFS=: 00:08:02.418 20:34:45 -- accel/accel.sh@20 -- # read -r var val 00:08:02.418 20:34:45 -- accel/accel.sh@21 -- # val=0x1 00:08:02.418 20:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.418 20:34:45 -- accel/accel.sh@20 -- # IFS=: 00:08:02.418 20:34:45 -- accel/accel.sh@20 -- # read -r var val 00:08:02.418 20:34:45 -- accel/accel.sh@21 -- # val= 00:08:02.418 20:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.418 20:34:45 -- accel/accel.sh@20 -- # IFS=: 00:08:02.418 20:34:45 -- accel/accel.sh@20 -- # read -r var val 00:08:02.419 20:34:45 -- accel/accel.sh@21 -- # val= 00:08:02.419 20:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.419 20:34:45 -- accel/accel.sh@20 -- # IFS=: 00:08:02.419 20:34:45 -- accel/accel.sh@20 -- # read -r var val 00:08:02.419 20:34:45 -- accel/accel.sh@21 -- # val=crc32c 00:08:02.419 20:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.419 20:34:45 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:08:02.419 20:34:45 -- accel/accel.sh@20 -- # IFS=: 00:08:02.419 20:34:45 -- accel/accel.sh@20 -- # read -r var val 00:08:02.419 20:34:45 -- accel/accel.sh@21 -- # val=0 00:08:02.419 20:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.419 20:34:45 -- accel/accel.sh@20 -- # IFS=: 00:08:02.419 20:34:45 -- accel/accel.sh@20 -- # read -r var val 00:08:02.419 20:34:45 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:08:02.419 20:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.419 20:34:45 -- accel/accel.sh@20 -- # IFS=: 00:08:02.419 20:34:45 -- accel/accel.sh@20 -- # read -r var val 00:08:02.419 20:34:45 -- accel/accel.sh@21 -- # val= 00:08:02.419 20:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.419 20:34:45 -- accel/accel.sh@20 -- # IFS=: 00:08:02.419 20:34:45 -- accel/accel.sh@20 -- # read -r var val 00:08:02.419 20:34:45 -- accel/accel.sh@21 -- # val=software 00:08:02.419 20:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.419 20:34:45 -- accel/accel.sh@23 -- # accel_module=software 00:08:02.419 20:34:45 -- accel/accel.sh@20 -- # IFS=: 00:08:02.419 20:34:45 -- accel/accel.sh@20 -- # read -r var val 00:08:02.419 20:34:45 -- accel/accel.sh@21 -- # val=32 00:08:02.419 20:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.419 20:34:45 -- accel/accel.sh@20 -- # IFS=: 00:08:02.419 20:34:45 -- accel/accel.sh@20 -- # read -r var val 00:08:02.419 20:34:45 -- accel/accel.sh@21 -- # val=32 00:08:02.419 20:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.419 20:34:45 -- accel/accel.sh@20 -- # IFS=: 00:08:02.419 20:34:45 -- accel/accel.sh@20 -- # read -r var val 00:08:02.419 20:34:45 -- accel/accel.sh@21 -- # val=1 00:08:02.419 20:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.419 20:34:45 -- accel/accel.sh@20 -- # IFS=: 00:08:02.419 20:34:45 -- accel/accel.sh@20 -- # read -r var val 00:08:02.419 20:34:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:02.419 20:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.419 20:34:45 -- accel/accel.sh@20 -- # IFS=: 00:08:02.419 20:34:45 -- accel/accel.sh@20 -- # read -r var val 00:08:02.419 20:34:45 -- accel/accel.sh@21 -- # val=Yes 00:08:02.419 20:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.419 20:34:45 -- accel/accel.sh@20 -- # IFS=: 00:08:02.419 20:34:45 -- accel/accel.sh@20 -- # read -r var val 00:08:02.419 20:34:45 -- accel/accel.sh@21 -- # val= 00:08:02.419 20:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.419 20:34:45 -- accel/accel.sh@20 -- # IFS=: 00:08:02.419 20:34:45 -- accel/accel.sh@20 -- # read -r var val 00:08:02.419 20:34:45 -- accel/accel.sh@21 -- # val= 00:08:02.419 20:34:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.419 20:34:45 -- accel/accel.sh@20 -- # IFS=: 00:08:02.419 20:34:45 -- accel/accel.sh@20 -- # read -r var val 00:08:04.322 20:34:47 -- accel/accel.sh@21 -- # val= 00:08:04.322 20:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.322 20:34:47 -- accel/accel.sh@20 -- # IFS=: 00:08:04.322 20:34:47 -- accel/accel.sh@20 -- # read -r var val 00:08:04.322 20:34:47 -- accel/accel.sh@21 -- # val= 00:08:04.322 20:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.322 20:34:47 -- accel/accel.sh@20 -- # IFS=: 00:08:04.322 20:34:47 -- accel/accel.sh@20 -- # read -r var val 00:08:04.322 20:34:47 -- accel/accel.sh@21 -- # val= 00:08:04.322 20:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.322 20:34:47 -- accel/accel.sh@20 -- # IFS=: 00:08:04.322 20:34:47 -- accel/accel.sh@20 -- # read -r var val 00:08:04.322 20:34:47 -- accel/accel.sh@21 -- # val= 00:08:04.322 20:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.322 20:34:47 -- accel/accel.sh@20 -- # IFS=: 00:08:04.322 20:34:47 -- accel/accel.sh@20 -- # read -r var val 00:08:04.322 20:34:47 -- accel/accel.sh@21 -- # val= 00:08:04.322 20:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.322 20:34:47 -- accel/accel.sh@20 -- # IFS=: 00:08:04.322 20:34:47 -- 
accel/accel.sh@20 -- # read -r var val 00:08:04.322 20:34:47 -- accel/accel.sh@21 -- # val= 00:08:04.322 20:34:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.322 20:34:47 -- accel/accel.sh@20 -- # IFS=: 00:08:04.322 20:34:47 -- accel/accel.sh@20 -- # read -r var val 00:08:04.322 ************************************ 00:08:04.322 END TEST accel_crc32c_C2 00:08:04.322 ************************************ 00:08:04.322 20:34:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:04.322 20:34:47 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:08:04.322 20:34:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:04.322 00:08:04.322 real 0m5.224s 00:08:04.322 user 0m4.576s 00:08:04.322 sys 0m0.356s 00:08:04.322 20:34:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.322 20:34:47 -- common/autotest_common.sh@10 -- # set +x 00:08:04.322 20:34:47 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:08:04.322 20:34:47 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:04.322 20:34:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:04.322 20:34:47 -- common/autotest_common.sh@10 -- # set +x 00:08:04.322 ************************************ 00:08:04.322 START TEST accel_copy 00:08:04.322 ************************************ 00:08:04.322 20:34:47 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:08:04.322 20:34:47 -- accel/accel.sh@16 -- # local accel_opc 00:08:04.322 20:34:47 -- accel/accel.sh@17 -- # local accel_module 00:08:04.323 20:34:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:08:04.323 20:34:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:08:04.323 20:34:47 -- accel/accel.sh@12 -- # build_accel_config 00:08:04.323 20:34:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:04.323 20:34:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:04.323 20:34:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:04.323 20:34:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:04.323 20:34:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:04.323 20:34:47 -- accel/accel.sh@41 -- # local IFS=, 00:08:04.323 20:34:47 -- accel/accel.sh@42 -- # jq -r . 00:08:04.582 [2024-04-15 20:34:47.912200] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:08:04.582 [2024-04-15 20:34:47.912357] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42481 ] 00:08:04.841 [2024-04-15 20:34:48.083445] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.841 [2024-04-15 20:34:48.287870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.377 20:34:50 -- accel/accel.sh@18 -- # out=' 00:08:07.377 SPDK Configuration: 00:08:07.377 Core mask: 0x1 00:08:07.377 00:08:07.377 Accel Perf Configuration: 00:08:07.377 Workload Type: copy 00:08:07.377 Transfer size: 4096 bytes 00:08:07.377 Vector count 1 00:08:07.377 Module: software 00:08:07.377 Queue depth: 32 00:08:07.377 Allocate depth: 32 00:08:07.377 # threads/core: 1 00:08:07.377 Run time: 1 seconds 00:08:07.377 Verify: Yes 00:08:07.377 00:08:07.377 Running for 1 seconds... 
00:08:07.377 00:08:07.377 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:07.377 ------------------------------------------------------------------------------------ 00:08:07.377 0,0 965600/s 3771 MiB/s 0 0 00:08:07.377 ==================================================================================== 00:08:07.377 Total 965600/s 3771 MiB/s 0 0' 00:08:07.377 20:34:50 -- accel/accel.sh@20 -- # IFS=: 00:08:07.377 20:34:50 -- accel/accel.sh@20 -- # read -r var val 00:08:07.377 20:34:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:08:07.377 20:34:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:08:07.377 20:34:50 -- accel/accel.sh@12 -- # build_accel_config 00:08:07.377 20:34:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:07.377 20:34:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:07.377 20:34:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:07.377 20:34:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:07.377 20:34:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:07.377 20:34:50 -- accel/accel.sh@41 -- # local IFS=, 00:08:07.377 20:34:50 -- accel/accel.sh@42 -- # jq -r . 00:08:07.377 [2024-04-15 20:34:50.506586] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:08:07.377 [2024-04-15 20:34:50.506983] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42515 ] 00:08:07.377 [2024-04-15 20:34:50.662016] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.377 [2024-04-15 20:34:50.864680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.636 20:34:51 -- accel/accel.sh@21 -- # val= 00:08:07.636 20:34:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.636 20:34:51 -- accel/accel.sh@20 -- # IFS=: 00:08:07.636 20:34:51 -- accel/accel.sh@20 -- # read -r var val 00:08:07.636 20:34:51 -- accel/accel.sh@21 -- # val= 00:08:07.636 20:34:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.636 20:34:51 -- accel/accel.sh@20 -- # IFS=: 00:08:07.636 20:34:51 -- accel/accel.sh@20 -- # read -r var val 00:08:07.636 20:34:51 -- accel/accel.sh@21 -- # val=0x1 00:08:07.636 20:34:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.636 20:34:51 -- accel/accel.sh@20 -- # IFS=: 00:08:07.636 20:34:51 -- accel/accel.sh@20 -- # read -r var val 00:08:07.636 20:34:51 -- accel/accel.sh@21 -- # val= 00:08:07.636 20:34:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.636 20:34:51 -- accel/accel.sh@20 -- # IFS=: 00:08:07.636 20:34:51 -- accel/accel.sh@20 -- # read -r var val 00:08:07.636 20:34:51 -- accel/accel.sh@21 -- # val= 00:08:07.636 20:34:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.636 20:34:51 -- accel/accel.sh@20 -- # IFS=: 00:08:07.636 20:34:51 -- accel/accel.sh@20 -- # read -r var val 00:08:07.636 20:34:51 -- accel/accel.sh@21 -- # val=copy 00:08:07.636 20:34:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.636 20:34:51 -- accel/accel.sh@24 -- # accel_opc=copy 00:08:07.636 20:34:51 -- accel/accel.sh@20 -- # IFS=: 00:08:07.636 20:34:51 -- accel/accel.sh@20 -- # read -r var val 00:08:07.636 20:34:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:07.636 20:34:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.636 20:34:51 -- accel/accel.sh@20 -- # IFS=: 00:08:07.636 20:34:51 -- accel/accel.sh@20 -- # read -r var val 00:08:07.636 20:34:51 -- 
accel/accel.sh@21 -- # val= 00:08:07.636 20:34:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.636 20:34:51 -- accel/accel.sh@20 -- # IFS=: 00:08:07.636 20:34:51 -- accel/accel.sh@20 -- # read -r var val 00:08:07.636 20:34:51 -- accel/accel.sh@21 -- # val=software 00:08:07.636 20:34:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.636 20:34:51 -- accel/accel.sh@23 -- # accel_module=software 00:08:07.636 20:34:51 -- accel/accel.sh@20 -- # IFS=: 00:08:07.636 20:34:51 -- accel/accel.sh@20 -- # read -r var val 00:08:07.636 20:34:51 -- accel/accel.sh@21 -- # val=32 00:08:07.636 20:34:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.636 20:34:51 -- accel/accel.sh@20 -- # IFS=: 00:08:07.636 20:34:51 -- accel/accel.sh@20 -- # read -r var val 00:08:07.636 20:34:51 -- accel/accel.sh@21 -- # val=32 00:08:07.636 20:34:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.636 20:34:51 -- accel/accel.sh@20 -- # IFS=: 00:08:07.636 20:34:51 -- accel/accel.sh@20 -- # read -r var val 00:08:07.636 20:34:51 -- accel/accel.sh@21 -- # val=1 00:08:07.636 20:34:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.636 20:34:51 -- accel/accel.sh@20 -- # IFS=: 00:08:07.636 20:34:51 -- accel/accel.sh@20 -- # read -r var val 00:08:07.636 20:34:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:07.636 20:34:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.636 20:34:51 -- accel/accel.sh@20 -- # IFS=: 00:08:07.636 20:34:51 -- accel/accel.sh@20 -- # read -r var val 00:08:07.636 20:34:51 -- accel/accel.sh@21 -- # val=Yes 00:08:07.636 20:34:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.636 20:34:51 -- accel/accel.sh@20 -- # IFS=: 00:08:07.636 20:34:51 -- accel/accel.sh@20 -- # read -r var val 00:08:07.636 20:34:51 -- accel/accel.sh@21 -- # val= 00:08:07.636 20:34:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.636 20:34:51 -- accel/accel.sh@20 -- # IFS=: 00:08:07.636 20:34:51 -- accel/accel.sh@20 -- # read -r var val 00:08:07.636 20:34:51 -- accel/accel.sh@21 -- # val= 00:08:07.636 20:34:51 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.636 20:34:51 -- accel/accel.sh@20 -- # IFS=: 00:08:07.636 20:34:51 -- accel/accel.sh@20 -- # read -r var val 00:08:09.539 20:34:52 -- accel/accel.sh@21 -- # val= 00:08:09.539 20:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.539 20:34:52 -- accel/accel.sh@20 -- # IFS=: 00:08:09.539 20:34:52 -- accel/accel.sh@20 -- # read -r var val 00:08:09.539 20:34:52 -- accel/accel.sh@21 -- # val= 00:08:09.539 20:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.539 20:34:52 -- accel/accel.sh@20 -- # IFS=: 00:08:09.539 20:34:52 -- accel/accel.sh@20 -- # read -r var val 00:08:09.539 20:34:52 -- accel/accel.sh@21 -- # val= 00:08:09.539 20:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.539 20:34:52 -- accel/accel.sh@20 -- # IFS=: 00:08:09.539 20:34:52 -- accel/accel.sh@20 -- # read -r var val 00:08:09.539 20:34:52 -- accel/accel.sh@21 -- # val= 00:08:09.539 20:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.539 20:34:52 -- accel/accel.sh@20 -- # IFS=: 00:08:09.539 20:34:52 -- accel/accel.sh@20 -- # read -r var val 00:08:09.539 20:34:52 -- accel/accel.sh@21 -- # val= 00:08:09.539 20:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.539 20:34:52 -- accel/accel.sh@20 -- # IFS=: 00:08:09.539 20:34:52 -- accel/accel.sh@20 -- # read -r var val 00:08:09.539 20:34:52 -- accel/accel.sh@21 -- # val= 00:08:09.539 20:34:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.539 20:34:52 -- accel/accel.sh@20 -- # IFS=: 00:08:09.539 20:34:52 -- 
accel/accel.sh@20 -- # read -r var val 00:08:09.539 20:34:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:09.539 20:34:52 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:08:09.539 20:34:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:09.539 ************************************ 00:08:09.539 END TEST accel_copy 00:08:09.539 ************************************ 00:08:09.539 00:08:09.539 real 0m5.228s 00:08:09.539 user 0m4.595s 00:08:09.539 sys 0m0.336s 00:08:09.539 20:34:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.539 20:34:52 -- common/autotest_common.sh@10 -- # set +x 00:08:09.539 20:34:53 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:09.539 20:34:53 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:08:09.539 20:34:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:09.539 20:34:53 -- common/autotest_common.sh@10 -- # set +x 00:08:09.797 ************************************ 00:08:09.797 START TEST accel_fill 00:08:09.797 ************************************ 00:08:09.797 20:34:53 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:09.797 20:34:53 -- accel/accel.sh@16 -- # local accel_opc 00:08:09.797 20:34:53 -- accel/accel.sh@17 -- # local accel_module 00:08:09.797 20:34:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:09.797 20:34:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:09.797 20:34:53 -- accel/accel.sh@12 -- # build_accel_config 00:08:09.797 20:34:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:09.797 20:34:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.797 20:34:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.797 20:34:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:09.797 20:34:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:09.797 20:34:53 -- accel/accel.sh@41 -- # local IFS=, 00:08:09.797 20:34:53 -- accel/accel.sh@42 -- # jq -r . 00:08:09.797 [2024-04-15 20:34:53.189544] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:08:09.797 [2024-04-15 20:34:53.189842] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42569 ] 00:08:10.093 [2024-04-15 20:34:53.338259] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.093 [2024-04-15 20:34:53.541910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.629 20:34:55 -- accel/accel.sh@18 -- # out=' 00:08:12.630 SPDK Configuration: 00:08:12.630 Core mask: 0x1 00:08:12.630 00:08:12.630 Accel Perf Configuration: 00:08:12.630 Workload Type: fill 00:08:12.630 Fill pattern: 0x80 00:08:12.630 Transfer size: 4096 bytes 00:08:12.630 Vector count 1 00:08:12.630 Module: software 00:08:12.630 Queue depth: 64 00:08:12.630 Allocate depth: 64 00:08:12.630 # threads/core: 1 00:08:12.630 Run time: 1 seconds 00:08:12.630 Verify: Yes 00:08:12.630 00:08:12.630 Running for 1 seconds... 
00:08:12.630 00:08:12.630 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:12.630 ------------------------------------------------------------------------------------ 00:08:12.630 0,0 1436224/s 5610 MiB/s 0 0 00:08:12.630 ==================================================================================== 00:08:12.630 Total 1436224/s 5610 MiB/s 0 0' 00:08:12.630 20:34:55 -- accel/accel.sh@20 -- # IFS=: 00:08:12.630 20:34:55 -- accel/accel.sh@20 -- # read -r var val 00:08:12.630 20:34:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:12.630 20:34:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:12.630 20:34:55 -- accel/accel.sh@12 -- # build_accel_config 00:08:12.630 20:34:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:12.630 20:34:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:12.630 20:34:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:12.630 20:34:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:12.630 20:34:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:12.630 20:34:55 -- accel/accel.sh@41 -- # local IFS=, 00:08:12.630 20:34:55 -- accel/accel.sh@42 -- # jq -r . 00:08:12.630 [2024-04-15 20:34:55.774951] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:08:12.630 [2024-04-15 20:34:55.775134] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42615 ] 00:08:12.630 [2024-04-15 20:34:55.923715] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.630 [2024-04-15 20:34:56.125233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.889 20:34:56 -- accel/accel.sh@21 -- # val= 00:08:12.889 20:34:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.889 20:34:56 -- accel/accel.sh@20 -- # IFS=: 00:08:12.889 20:34:56 -- accel/accel.sh@20 -- # read -r var val 00:08:12.889 20:34:56 -- accel/accel.sh@21 -- # val= 00:08:12.889 20:34:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.889 20:34:56 -- accel/accel.sh@20 -- # IFS=: 00:08:12.889 20:34:56 -- accel/accel.sh@20 -- # read -r var val 00:08:12.889 20:34:56 -- accel/accel.sh@21 -- # val=0x1 00:08:12.889 20:34:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.889 20:34:56 -- accel/accel.sh@20 -- # IFS=: 00:08:12.889 20:34:56 -- accel/accel.sh@20 -- # read -r var val 00:08:12.889 20:34:56 -- accel/accel.sh@21 -- # val= 00:08:12.889 20:34:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.889 20:34:56 -- accel/accel.sh@20 -- # IFS=: 00:08:12.889 20:34:56 -- accel/accel.sh@20 -- # read -r var val 00:08:12.889 20:34:56 -- accel/accel.sh@21 -- # val= 00:08:12.889 20:34:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.889 20:34:56 -- accel/accel.sh@20 -- # IFS=: 00:08:12.889 20:34:56 -- accel/accel.sh@20 -- # read -r var val 00:08:12.889 20:34:56 -- accel/accel.sh@21 -- # val=fill 00:08:12.889 20:34:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.889 20:34:56 -- accel/accel.sh@24 -- # accel_opc=fill 00:08:12.889 20:34:56 -- accel/accel.sh@20 -- # IFS=: 00:08:12.889 20:34:56 -- accel/accel.sh@20 -- # read -r var val 00:08:12.889 20:34:56 -- accel/accel.sh@21 -- # val=0x80 00:08:12.889 20:34:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.889 20:34:56 -- accel/accel.sh@20 -- # IFS=: 00:08:12.889 20:34:56 -- accel/accel.sh@20 -- # read -r var val 
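The bandwidth column in the fill table above is consistent with transfers/s multiplied by the 4096-byte transfer size: 1436224 * 4096 bytes/s comes to about 5610 MiB/s, as reported. A quick shell cross-check (variable names are illustrative):

    # Derive MiB/s from the fill table's transfers/s and transfer size.
    transfers=1436224
    xfer_size=4096
    echo $(( transfers * xfer_size / 1024 / 1024 ))   # prints 5610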
00:08:12.889 20:34:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:12.889 20:34:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.889 20:34:56 -- accel/accel.sh@20 -- # IFS=: 00:08:12.889 20:34:56 -- accel/accel.sh@20 -- # read -r var val 00:08:12.889 20:34:56 -- accel/accel.sh@21 -- # val= 00:08:12.889 20:34:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.889 20:34:56 -- accel/accel.sh@20 -- # IFS=: 00:08:12.889 20:34:56 -- accel/accel.sh@20 -- # read -r var val 00:08:12.889 20:34:56 -- accel/accel.sh@21 -- # val=software 00:08:12.889 20:34:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.889 20:34:56 -- accel/accel.sh@23 -- # accel_module=software 00:08:12.889 20:34:56 -- accel/accel.sh@20 -- # IFS=: 00:08:12.889 20:34:56 -- accel/accel.sh@20 -- # read -r var val 00:08:12.889 20:34:56 -- accel/accel.sh@21 -- # val=64 00:08:12.889 20:34:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.889 20:34:56 -- accel/accel.sh@20 -- # IFS=: 00:08:12.889 20:34:56 -- accel/accel.sh@20 -- # read -r var val 00:08:12.889 20:34:56 -- accel/accel.sh@21 -- # val=64 00:08:12.889 20:34:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.889 20:34:56 -- accel/accel.sh@20 -- # IFS=: 00:08:12.889 20:34:56 -- accel/accel.sh@20 -- # read -r var val 00:08:12.889 20:34:56 -- accel/accel.sh@21 -- # val=1 00:08:12.889 20:34:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.889 20:34:56 -- accel/accel.sh@20 -- # IFS=: 00:08:12.889 20:34:56 -- accel/accel.sh@20 -- # read -r var val 00:08:12.889 20:34:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:12.889 20:34:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.889 20:34:56 -- accel/accel.sh@20 -- # IFS=: 00:08:12.889 20:34:56 -- accel/accel.sh@20 -- # read -r var val 00:08:12.889 20:34:56 -- accel/accel.sh@21 -- # val=Yes 00:08:12.889 20:34:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.889 20:34:56 -- accel/accel.sh@20 -- # IFS=: 00:08:12.889 20:34:56 -- accel/accel.sh@20 -- # read -r var val 00:08:12.889 20:34:56 -- accel/accel.sh@21 -- # val= 00:08:12.889 20:34:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.889 20:34:56 -- accel/accel.sh@20 -- # IFS=: 00:08:12.889 20:34:56 -- accel/accel.sh@20 -- # read -r var val 00:08:12.889 20:34:56 -- accel/accel.sh@21 -- # val= 00:08:12.889 20:34:56 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.889 20:34:56 -- accel/accel.sh@20 -- # IFS=: 00:08:12.889 20:34:56 -- accel/accel.sh@20 -- # read -r var val 00:08:14.792 20:34:58 -- accel/accel.sh@21 -- # val= 00:08:14.792 20:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.792 20:34:58 -- accel/accel.sh@20 -- # IFS=: 00:08:14.792 20:34:58 -- accel/accel.sh@20 -- # read -r var val 00:08:14.792 20:34:58 -- accel/accel.sh@21 -- # val= 00:08:14.792 20:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.792 20:34:58 -- accel/accel.sh@20 -- # IFS=: 00:08:14.792 20:34:58 -- accel/accel.sh@20 -- # read -r var val 00:08:14.792 20:34:58 -- accel/accel.sh@21 -- # val= 00:08:14.792 20:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.792 20:34:58 -- accel/accel.sh@20 -- # IFS=: 00:08:14.792 20:34:58 -- accel/accel.sh@20 -- # read -r var val 00:08:14.792 20:34:58 -- accel/accel.sh@21 -- # val= 00:08:14.792 20:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.792 20:34:58 -- accel/accel.sh@20 -- # IFS=: 00:08:14.792 20:34:58 -- accel/accel.sh@20 -- # read -r var val 00:08:14.792 20:34:58 -- accel/accel.sh@21 -- # val= 00:08:14.792 20:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.792 20:34:58 -- accel/accel.sh@20 -- # IFS=: 
00:08:14.792 20:34:58 -- accel/accel.sh@20 -- # read -r var val 00:08:14.792 20:34:58 -- accel/accel.sh@21 -- # val= 00:08:14.792 20:34:58 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.792 20:34:58 -- accel/accel.sh@20 -- # IFS=: 00:08:14.792 20:34:58 -- accel/accel.sh@20 -- # read -r var val 00:08:14.792 20:34:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:14.792 20:34:58 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:08:14.792 20:34:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:14.792 ************************************ 00:08:14.792 END TEST accel_fill 00:08:14.792 ************************************ 00:08:14.792 00:08:14.792 real 0m5.183s 00:08:14.792 user 0m4.544s 00:08:14.792 sys 0m0.341s 00:08:14.792 20:34:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.792 20:34:58 -- common/autotest_common.sh@10 -- # set +x 00:08:14.792 20:34:58 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:08:14.792 20:34:58 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:14.792 20:34:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:14.792 20:34:58 -- common/autotest_common.sh@10 -- # set +x 00:08:14.792 ************************************ 00:08:14.792 START TEST accel_copy_crc32c 00:08:14.792 ************************************ 00:08:14.792 20:34:58 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:08:14.792 20:34:58 -- accel/accel.sh@16 -- # local accel_opc 00:08:14.792 20:34:58 -- accel/accel.sh@17 -- # local accel_module 00:08:15.052 20:34:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:08:15.052 20:34:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:08:15.052 20:34:58 -- accel/accel.sh@12 -- # build_accel_config 00:08:15.052 20:34:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:15.052 20:34:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:15.052 20:34:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:15.052 20:34:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:15.052 20:34:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:15.052 20:34:58 -- accel/accel.sh@41 -- # local IFS=, 00:08:15.052 20:34:58 -- accel/accel.sh@42 -- # jq -r . 00:08:15.052 [2024-04-15 20:34:58.430924] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:08:15.052 [2024-04-15 20:34:58.431080] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42667 ] 00:08:15.312 [2024-04-15 20:34:58.586612] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.312 [2024-04-15 20:34:58.798019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.845 20:35:00 -- accel/accel.sh@18 -- # out=' 00:08:17.845 SPDK Configuration: 00:08:17.845 Core mask: 0x1 00:08:17.845 00:08:17.845 Accel Perf Configuration: 00:08:17.845 Workload Type: copy_crc32c 00:08:17.845 CRC-32C seed: 0 00:08:17.845 Vector size: 4096 bytes 00:08:17.845 Transfer size: 4096 bytes 00:08:17.845 Vector count 1 00:08:17.845 Module: software 00:08:17.845 Queue depth: 32 00:08:17.845 Allocate depth: 32 00:08:17.845 # threads/core: 1 00:08:17.845 Run time: 1 seconds 00:08:17.845 Verify: Yes 00:08:17.845 00:08:17.845 Running for 1 seconds... 
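Most of the repeated trace lines in this section (case "$var" in, IFS=:, read -r var val) are the harness parsing accel_perf's colon-separated output one field at a time, assigning accel_opc and accel_module along the way. A stripped-down sketch of that loop shape, reusing the binary path from this log — this is not accel.sh itself:

    # Sketch of the parse loop implied by the xtrace lines above.
    while IFS=: read -r var val; do
        case "$var" in
            *'Workload Type'*) accel_opc=${val// /} ;;
            *'Module'*)        accel_module=${val// /} ;;
        esac
    done < <(/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y)
    echo "opc=$accel_opc module=$accel_module"

This is why each test ends with assertions like [[ -n software ]] and [[ -n copy_crc32c ]]: the loop has captured the module and opcode from the run's own output.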
00:08:17.845 00:08:17.845 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:17.845 ------------------------------------------------------------------------------------ 00:08:17.845 0,0 114560/s 447 MiB/s 0 0 00:08:17.845 ==================================================================================== 00:08:17.845 Total 114560/s 447 MiB/s 0 0' 00:08:17.845 20:35:00 -- accel/accel.sh@20 -- # IFS=: 00:08:17.845 20:35:00 -- accel/accel.sh@20 -- # read -r var val 00:08:17.846 20:35:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:08:17.846 20:35:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:08:17.846 20:35:00 -- accel/accel.sh@12 -- # build_accel_config 00:08:17.846 20:35:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:17.846 20:35:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:17.846 20:35:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:17.846 20:35:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:17.846 20:35:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:17.846 20:35:00 -- accel/accel.sh@41 -- # local IFS=, 00:08:17.846 20:35:00 -- accel/accel.sh@42 -- # jq -r . 00:08:17.846 [2024-04-15 20:35:01.060852] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:08:17.846 [2024-04-15 20:35:01.061019] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42713 ] 00:08:17.846 [2024-04-15 20:35:01.226450] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.105 [2024-04-15 20:35:01.437296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.364 20:35:01 -- accel/accel.sh@21 -- # val= 00:08:18.364 20:35:01 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.364 20:35:01 -- accel/accel.sh@20 -- # IFS=: 00:08:18.364 20:35:01 -- accel/accel.sh@20 -- # read -r var val 00:08:18.364 20:35:01 -- accel/accel.sh@21 -- # val= 00:08:18.364 20:35:01 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.364 20:35:01 -- accel/accel.sh@20 -- # IFS=: 00:08:18.364 20:35:01 -- accel/accel.sh@20 -- # read -r var val 00:08:18.364 20:35:01 -- accel/accel.sh@21 -- # val=0x1 00:08:18.364 20:35:01 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.364 20:35:01 -- accel/accel.sh@20 -- # IFS=: 00:08:18.364 20:35:01 -- accel/accel.sh@20 -- # read -r var val 00:08:18.364 20:35:01 -- accel/accel.sh@21 -- # val= 00:08:18.364 20:35:01 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.364 20:35:01 -- accel/accel.sh@20 -- # IFS=: 00:08:18.364 20:35:01 -- accel/accel.sh@20 -- # read -r var val 00:08:18.364 20:35:01 -- accel/accel.sh@21 -- # val= 00:08:18.364 20:35:01 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.364 20:35:01 -- accel/accel.sh@20 -- # IFS=: 00:08:18.364 20:35:01 -- accel/accel.sh@20 -- # read -r var val 00:08:18.364 20:35:01 -- accel/accel.sh@21 -- # val=copy_crc32c 00:08:18.364 20:35:01 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.364 20:35:01 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:08:18.364 20:35:01 -- accel/accel.sh@20 -- # IFS=: 00:08:18.364 20:35:01 -- accel/accel.sh@20 -- # read -r var val 00:08:18.364 20:35:01 -- accel/accel.sh@21 -- # val=0 00:08:18.364 20:35:01 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.364 20:35:01 -- accel/accel.sh@20 -- # IFS=: 00:08:18.364 20:35:01 -- accel/accel.sh@20 -- # read -r var val 00:08:18.364 
20:35:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:18.364 20:35:01 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.365 20:35:01 -- accel/accel.sh@20 -- # IFS=: 00:08:18.365 20:35:01 -- accel/accel.sh@20 -- # read -r var val 00:08:18.365 20:35:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:18.365 20:35:01 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.365 20:35:01 -- accel/accel.sh@20 -- # IFS=: 00:08:18.365 20:35:01 -- accel/accel.sh@20 -- # read -r var val 00:08:18.365 20:35:01 -- accel/accel.sh@21 -- # val= 00:08:18.365 20:35:01 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.365 20:35:01 -- accel/accel.sh@20 -- # IFS=: 00:08:18.365 20:35:01 -- accel/accel.sh@20 -- # read -r var val 00:08:18.365 20:35:01 -- accel/accel.sh@21 -- # val=software 00:08:18.365 20:35:01 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.365 20:35:01 -- accel/accel.sh@23 -- # accel_module=software 00:08:18.365 20:35:01 -- accel/accel.sh@20 -- # IFS=: 00:08:18.365 20:35:01 -- accel/accel.sh@20 -- # read -r var val 00:08:18.365 20:35:01 -- accel/accel.sh@21 -- # val=32 00:08:18.365 20:35:01 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.365 20:35:01 -- accel/accel.sh@20 -- # IFS=: 00:08:18.365 20:35:01 -- accel/accel.sh@20 -- # read -r var val 00:08:18.365 20:35:01 -- accel/accel.sh@21 -- # val=32 00:08:18.365 20:35:01 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.365 20:35:01 -- accel/accel.sh@20 -- # IFS=: 00:08:18.365 20:35:01 -- accel/accel.sh@20 -- # read -r var val 00:08:18.365 20:35:01 -- accel/accel.sh@21 -- # val=1 00:08:18.365 20:35:01 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.365 20:35:01 -- accel/accel.sh@20 -- # IFS=: 00:08:18.365 20:35:01 -- accel/accel.sh@20 -- # read -r var val 00:08:18.365 20:35:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:18.365 20:35:01 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.365 20:35:01 -- accel/accel.sh@20 -- # IFS=: 00:08:18.365 20:35:01 -- accel/accel.sh@20 -- # read -r var val 00:08:18.365 20:35:01 -- accel/accel.sh@21 -- # val=Yes 00:08:18.365 20:35:01 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.365 20:35:01 -- accel/accel.sh@20 -- # IFS=: 00:08:18.365 20:35:01 -- accel/accel.sh@20 -- # read -r var val 00:08:18.365 20:35:01 -- accel/accel.sh@21 -- # val= 00:08:18.365 20:35:01 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.365 20:35:01 -- accel/accel.sh@20 -- # IFS=: 00:08:18.365 20:35:01 -- accel/accel.sh@20 -- # read -r var val 00:08:18.365 20:35:01 -- accel/accel.sh@21 -- # val= 00:08:18.365 20:35:01 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.365 20:35:01 -- accel/accel.sh@20 -- # IFS=: 00:08:18.365 20:35:01 -- accel/accel.sh@20 -- # read -r var val 00:08:20.270 20:35:03 -- accel/accel.sh@21 -- # val= 00:08:20.270 20:35:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.270 20:35:03 -- accel/accel.sh@20 -- # IFS=: 00:08:20.270 20:35:03 -- accel/accel.sh@20 -- # read -r var val 00:08:20.270 20:35:03 -- accel/accel.sh@21 -- # val= 00:08:20.270 20:35:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.270 20:35:03 -- accel/accel.sh@20 -- # IFS=: 00:08:20.270 20:35:03 -- accel/accel.sh@20 -- # read -r var val 00:08:20.270 20:35:03 -- accel/accel.sh@21 -- # val= 00:08:20.270 20:35:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.270 20:35:03 -- accel/accel.sh@20 -- # IFS=: 00:08:20.270 20:35:03 -- accel/accel.sh@20 -- # read -r var val 00:08:20.270 20:35:03 -- accel/accel.sh@21 -- # val= 00:08:20.270 20:35:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.270 20:35:03 -- accel/accel.sh@20 -- # IFS=: 
00:08:20.270 20:35:03 -- accel/accel.sh@20 -- # read -r var val 00:08:20.270 20:35:03 -- accel/accel.sh@21 -- # val= 00:08:20.270 20:35:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.270 20:35:03 -- accel/accel.sh@20 -- # IFS=: 00:08:20.270 20:35:03 -- accel/accel.sh@20 -- # read -r var val 00:08:20.270 20:35:03 -- accel/accel.sh@21 -- # val= 00:08:20.270 20:35:03 -- accel/accel.sh@22 -- # case "$var" in 00:08:20.270 20:35:03 -- accel/accel.sh@20 -- # IFS=: 00:08:20.270 20:35:03 -- accel/accel.sh@20 -- # read -r var val 00:08:20.270 ************************************ 00:08:20.270 END TEST accel_copy_crc32c 00:08:20.270 ************************************ 00:08:20.270 20:35:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:20.270 20:35:03 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:08:20.270 20:35:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:20.270 00:08:20.270 real 0m5.280s 00:08:20.270 user 0m4.624s 00:08:20.270 sys 0m0.361s 00:08:20.270 20:35:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.270 20:35:03 -- common/autotest_common.sh@10 -- # set +x 00:08:20.270 20:35:03 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:08:20.270 20:35:03 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:08:20.270 20:35:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:20.270 20:35:03 -- common/autotest_common.sh@10 -- # set +x 00:08:20.270 ************************************ 00:08:20.270 START TEST accel_copy_crc32c_C2 00:08:20.270 ************************************ 00:08:20.270 20:35:03 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:08:20.270 20:35:03 -- accel/accel.sh@16 -- # local accel_opc 00:08:20.270 20:35:03 -- accel/accel.sh@17 -- # local accel_module 00:08:20.270 20:35:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:20.270 20:35:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:20.270 20:35:03 -- accel/accel.sh@12 -- # build_accel_config 00:08:20.270 20:35:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:20.270 20:35:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:20.270 20:35:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:20.270 20:35:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:20.270 20:35:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:20.270 20:35:03 -- accel/accel.sh@41 -- # local IFS=, 00:08:20.270 20:35:03 -- accel/accel.sh@42 -- # jq -r . 00:08:20.529 [2024-04-15 20:35:03.776847] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:08:20.529 [2024-04-15 20:35:03.777009] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42760 ] 00:08:20.529 [2024-04-15 20:35:03.949121] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.787 [2024-04-15 20:35:04.160091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.317 20:35:06 -- accel/accel.sh@18 -- # out=' 00:08:23.317 SPDK Configuration: 00:08:23.317 Core mask: 0x1 00:08:23.317 00:08:23.317 Accel Perf Configuration: 00:08:23.317 Workload Type: copy_crc32c 00:08:23.317 CRC-32C seed: 0 00:08:23.317 Vector size: 4096 bytes 00:08:23.317 Transfer size: 8192 bytes 00:08:23.317 Vector count 2 00:08:23.317 Module: software 00:08:23.317 Queue depth: 32 00:08:23.317 Allocate depth: 32 00:08:23.317 # threads/core: 1 00:08:23.317 Run time: 1 seconds 00:08:23.317 Verify: Yes 00:08:23.317 00:08:23.317 Running for 1 seconds... 00:08:23.317 00:08:23.317 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:23.317 ------------------------------------------------------------------------------------ 00:08:23.317 0,0 58976/s 460 MiB/s 0 0 00:08:23.317 ==================================================================================== 00:08:23.317 Total 58976/s 230 MiB/s 0 0' 00:08:23.317 20:35:06 -- accel/accel.sh@20 -- # IFS=: 00:08:23.317 20:35:06 -- accel/accel.sh@20 -- # read -r var val 00:08:23.317 20:35:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:23.317 20:35:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:23.317 20:35:06 -- accel/accel.sh@12 -- # build_accel_config 00:08:23.317 20:35:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:23.317 20:35:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:23.317 20:35:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:23.317 20:35:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:23.317 20:35:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:23.317 20:35:06 -- accel/accel.sh@41 -- # local IFS=, 00:08:23.317 20:35:06 -- accel/accel.sh@42 -- # jq -r . 00:08:23.317 [2024-04-15 20:35:06.434557] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
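With -C 2 the copy_crc32c configuration above reports a 4096-byte vector size but an 8192-byte transfer size, i.e. transfer size = vector size * vector count, and the per-core bandwidth follows from the full transfer: 58976 transfers/s * 8192 bytes is about 460 MiB/s, matching the table. In shell terms (names are illustrative):

    # copy_crc32c -C 2: numbers taken from the table above.
    vec_size=4096; vec_count=2; transfers=58976
    xfer=$(( vec_size * vec_count ))              # 8192 bytes
    echo $(( transfers * xfer / 1024 / 1024 ))    # prints 460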
00:08:23.317 [2024-04-15 20:35:06.434961] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42806 ] 00:08:23.317 [2024-04-15 20:35:06.594343] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.317 [2024-04-15 20:35:06.809087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.577 20:35:07 -- accel/accel.sh@21 -- # val= 00:08:23.577 20:35:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.577 20:35:07 -- accel/accel.sh@20 -- # IFS=: 00:08:23.577 20:35:07 -- accel/accel.sh@20 -- # read -r var val 00:08:23.577 20:35:07 -- accel/accel.sh@21 -- # val= 00:08:23.577 20:35:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.577 20:35:07 -- accel/accel.sh@20 -- # IFS=: 00:08:23.577 20:35:07 -- accel/accel.sh@20 -- # read -r var val 00:08:23.577 20:35:07 -- accel/accel.sh@21 -- # val=0x1 00:08:23.577 20:35:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.577 20:35:07 -- accel/accel.sh@20 -- # IFS=: 00:08:23.577 20:35:07 -- accel/accel.sh@20 -- # read -r var val 00:08:23.577 20:35:07 -- accel/accel.sh@21 -- # val= 00:08:23.577 20:35:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.577 20:35:07 -- accel/accel.sh@20 -- # IFS=: 00:08:23.577 20:35:07 -- accel/accel.sh@20 -- # read -r var val 00:08:23.577 20:35:07 -- accel/accel.sh@21 -- # val= 00:08:23.577 20:35:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.577 20:35:07 -- accel/accel.sh@20 -- # IFS=: 00:08:23.577 20:35:07 -- accel/accel.sh@20 -- # read -r var val 00:08:23.577 20:35:07 -- accel/accel.sh@21 -- # val=copy_crc32c 00:08:23.577 20:35:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.577 20:35:07 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:08:23.577 20:35:07 -- accel/accel.sh@20 -- # IFS=: 00:08:23.577 20:35:07 -- accel/accel.sh@20 -- # read -r var val 00:08:23.577 20:35:07 -- accel/accel.sh@21 -- # val=0 00:08:23.577 20:35:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.577 20:35:07 -- accel/accel.sh@20 -- # IFS=: 00:08:23.577 20:35:07 -- accel/accel.sh@20 -- # read -r var val 00:08:23.577 20:35:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:23.577 20:35:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.577 20:35:07 -- accel/accel.sh@20 -- # IFS=: 00:08:23.577 20:35:07 -- accel/accel.sh@20 -- # read -r var val 00:08:23.577 20:35:07 -- accel/accel.sh@21 -- # val='8192 bytes' 00:08:23.577 20:35:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.577 20:35:07 -- accel/accel.sh@20 -- # IFS=: 00:08:23.577 20:35:07 -- accel/accel.sh@20 -- # read -r var val 00:08:23.577 20:35:07 -- accel/accel.sh@21 -- # val= 00:08:23.577 20:35:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.577 20:35:07 -- accel/accel.sh@20 -- # IFS=: 00:08:23.578 20:35:07 -- accel/accel.sh@20 -- # read -r var val 00:08:23.578 20:35:07 -- accel/accel.sh@21 -- # val=software 00:08:23.578 20:35:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.578 20:35:07 -- accel/accel.sh@23 -- # accel_module=software 00:08:23.578 20:35:07 -- accel/accel.sh@20 -- # IFS=: 00:08:23.578 20:35:07 -- accel/accel.sh@20 -- # read -r var val 00:08:23.578 20:35:07 -- accel/accel.sh@21 -- # val=32 00:08:23.578 20:35:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.578 20:35:07 -- accel/accel.sh@20 -- # IFS=: 00:08:23.578 20:35:07 -- accel/accel.sh@20 -- # read -r var val 00:08:23.578 20:35:07 -- accel/accel.sh@21 -- # val=32 
00:08:23.578 20:35:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.578 20:35:07 -- accel/accel.sh@20 -- # IFS=: 00:08:23.578 20:35:07 -- accel/accel.sh@20 -- # read -r var val 00:08:23.578 20:35:07 -- accel/accel.sh@21 -- # val=1 00:08:23.578 20:35:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.578 20:35:07 -- accel/accel.sh@20 -- # IFS=: 00:08:23.578 20:35:07 -- accel/accel.sh@20 -- # read -r var val 00:08:23.578 20:35:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:23.578 20:35:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.578 20:35:07 -- accel/accel.sh@20 -- # IFS=: 00:08:23.578 20:35:07 -- accel/accel.sh@20 -- # read -r var val 00:08:23.578 20:35:07 -- accel/accel.sh@21 -- # val=Yes 00:08:23.578 20:35:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.578 20:35:07 -- accel/accel.sh@20 -- # IFS=: 00:08:23.578 20:35:07 -- accel/accel.sh@20 -- # read -r var val 00:08:23.578 20:35:07 -- accel/accel.sh@21 -- # val= 00:08:23.578 20:35:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.578 20:35:07 -- accel/accel.sh@20 -- # IFS=: 00:08:23.578 20:35:07 -- accel/accel.sh@20 -- # read -r var val 00:08:23.578 20:35:07 -- accel/accel.sh@21 -- # val= 00:08:23.578 20:35:07 -- accel/accel.sh@22 -- # case "$var" in 00:08:23.578 20:35:07 -- accel/accel.sh@20 -- # IFS=: 00:08:23.578 20:35:07 -- accel/accel.sh@20 -- # read -r var val 00:08:25.488 20:35:08 -- accel/accel.sh@21 -- # val= 00:08:25.488 20:35:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.488 20:35:08 -- accel/accel.sh@20 -- # IFS=: 00:08:25.488 20:35:08 -- accel/accel.sh@20 -- # read -r var val 00:08:25.488 20:35:08 -- accel/accel.sh@21 -- # val= 00:08:25.488 20:35:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.488 20:35:08 -- accel/accel.sh@20 -- # IFS=: 00:08:25.488 20:35:08 -- accel/accel.sh@20 -- # read -r var val 00:08:25.488 20:35:08 -- accel/accel.sh@21 -- # val= 00:08:25.488 20:35:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.488 20:35:08 -- accel/accel.sh@20 -- # IFS=: 00:08:25.488 20:35:08 -- accel/accel.sh@20 -- # read -r var val 00:08:25.488 20:35:08 -- accel/accel.sh@21 -- # val= 00:08:25.488 20:35:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.488 20:35:08 -- accel/accel.sh@20 -- # IFS=: 00:08:25.488 20:35:08 -- accel/accel.sh@20 -- # read -r var val 00:08:25.488 20:35:08 -- accel/accel.sh@21 -- # val= 00:08:25.488 20:35:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.488 20:35:08 -- accel/accel.sh@20 -- # IFS=: 00:08:25.488 20:35:08 -- accel/accel.sh@20 -- # read -r var val 00:08:25.488 20:35:08 -- accel/accel.sh@21 -- # val= 00:08:25.488 20:35:08 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.488 20:35:08 -- accel/accel.sh@20 -- # IFS=: 00:08:25.488 20:35:08 -- accel/accel.sh@20 -- # read -r var val 00:08:25.488 ************************************ 00:08:25.488 END TEST accel_copy_crc32c_C2 00:08:25.488 ************************************ 00:08:25.488 20:35:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:25.488 20:35:08 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:08:25.488 20:35:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:25.488 00:08:25.488 real 0m5.299s 00:08:25.488 user 0m4.665s 00:08:25.488 sys 0m0.336s 00:08:25.488 20:35:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.488 20:35:08 -- common/autotest_common.sh@10 -- # set +x 00:08:25.488 20:35:08 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:08:25.488 20:35:08 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
00:08:25.488 20:35:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:25.488 20:35:08 -- common/autotest_common.sh@10 -- # set +x 00:08:25.747 ************************************ 00:08:25.747 START TEST accel_dualcast 00:08:25.747 ************************************ 00:08:25.747 20:35:08 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:08:25.747 20:35:08 -- accel/accel.sh@16 -- # local accel_opc 00:08:25.747 20:35:08 -- accel/accel.sh@17 -- # local accel_module 00:08:25.747 20:35:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:08:25.747 20:35:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:25.747 20:35:08 -- accel/accel.sh@12 -- # build_accel_config 00:08:25.747 20:35:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:25.747 20:35:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:25.747 20:35:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:25.747 20:35:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:25.748 20:35:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:25.748 20:35:08 -- accel/accel.sh@41 -- # local IFS=, 00:08:25.748 20:35:08 -- accel/accel.sh@42 -- # jq -r . 00:08:25.748 [2024-04-15 20:35:09.117768] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:08:25.748 [2024-04-15 20:35:09.117921] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42864 ] 00:08:26.008 [2024-04-15 20:35:09.271925] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.008 [2024-04-15 20:35:09.483546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.545 20:35:11 -- accel/accel.sh@18 -- # out=' 00:08:28.545 SPDK Configuration: 00:08:28.545 Core mask: 0x1 00:08:28.545 00:08:28.545 Accel Perf Configuration: 00:08:28.545 Workload Type: dualcast 00:08:28.545 Transfer size: 4096 bytes 00:08:28.545 Vector count 1 00:08:28.545 Module: software 00:08:28.545 Queue depth: 32 00:08:28.545 Allocate depth: 32 00:08:28.545 # threads/core: 1 00:08:28.545 Run time: 1 seconds 00:08:28.545 Verify: Yes 00:08:28.545 00:08:28.545 Running for 1 seconds... 00:08:28.545 00:08:28.545 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:28.545 ------------------------------------------------------------------------------------ 00:08:28.545 0,0 816896/s 3191 MiB/s 0 0 00:08:28.545 ==================================================================================== 00:08:28.545 Total 816896/s 3191 MiB/s 0 0' 00:08:28.545 20:35:11 -- accel/accel.sh@20 -- # IFS=: 00:08:28.545 20:35:11 -- accel/accel.sh@20 -- # read -r var val 00:08:28.545 20:35:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:28.545 20:35:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:28.545 20:35:11 -- accel/accel.sh@12 -- # build_accel_config 00:08:28.545 20:35:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:28.545 20:35:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:28.545 20:35:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:28.545 20:35:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:28.545 20:35:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:28.545 20:35:11 -- accel/accel.sh@41 -- # local IFS=, 00:08:28.545 20:35:11 -- accel/accel.sh@42 -- # jq -r . 
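The START TEST/END TEST banners and the real/user/sys triplet around every test suggest run_test is a thin timing-and-banner wrapper over the named test command. A rough sketch of that wrapper pattern, with a placeholder command since accel_test is only defined inside the harness; this is illustrative, not SPDK's actual common/autotest_common.sh:

    # Illustrative wrapper pattern only.
    run_test_sketch() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"        # bash's time keyword prints the real/user/sys triplet
        echo "END TEST $name"
    }
    run_test_sketch demo sleep 1   # stand-in for: run_test accel_dualcast accel_test -t 1 -w dualcast -y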
00:08:28.545 [2024-04-15 20:35:11.718524] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:08:28.545 [2024-04-15 20:35:11.718853] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42900 ] 00:08:28.545 [2024-04-15 20:35:11.890020] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.805 [2024-04-15 20:35:12.089265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.805 20:35:12 -- accel/accel.sh@21 -- # val= 00:08:28.805 20:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.805 20:35:12 -- accel/accel.sh@20 -- # IFS=: 00:08:28.805 20:35:12 -- accel/accel.sh@20 -- # read -r var val 00:08:28.805 20:35:12 -- accel/accel.sh@21 -- # val= 00:08:28.805 20:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.805 20:35:12 -- accel/accel.sh@20 -- # IFS=: 00:08:28.805 20:35:12 -- accel/accel.sh@20 -- # read -r var val 00:08:28.805 20:35:12 -- accel/accel.sh@21 -- # val=0x1 00:08:28.805 20:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.805 20:35:12 -- accel/accel.sh@20 -- # IFS=: 00:08:28.805 20:35:12 -- accel/accel.sh@20 -- # read -r var val 00:08:28.805 20:35:12 -- accel/accel.sh@21 -- # val= 00:08:28.805 20:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.805 20:35:12 -- accel/accel.sh@20 -- # IFS=: 00:08:28.805 20:35:12 -- accel/accel.sh@20 -- # read -r var val 00:08:28.805 20:35:12 -- accel/accel.sh@21 -- # val= 00:08:28.805 20:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.805 20:35:12 -- accel/accel.sh@20 -- # IFS=: 00:08:28.805 20:35:12 -- accel/accel.sh@20 -- # read -r var val 00:08:28.805 20:35:12 -- accel/accel.sh@21 -- # val=dualcast 00:08:28.805 20:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.805 20:35:12 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:08:28.805 20:35:12 -- accel/accel.sh@20 -- # IFS=: 00:08:28.805 20:35:12 -- accel/accel.sh@20 -- # read -r var val 00:08:28.805 20:35:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:28.805 20:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.805 20:35:12 -- accel/accel.sh@20 -- # IFS=: 00:08:28.805 20:35:12 -- accel/accel.sh@20 -- # read -r var val 00:08:28.805 20:35:12 -- accel/accel.sh@21 -- # val= 00:08:28.805 20:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.805 20:35:12 -- accel/accel.sh@20 -- # IFS=: 00:08:28.805 20:35:12 -- accel/accel.sh@20 -- # read -r var val 00:08:28.805 20:35:12 -- accel/accel.sh@21 -- # val=software 00:08:28.805 20:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.805 20:35:12 -- accel/accel.sh@23 -- # accel_module=software 00:08:28.805 20:35:12 -- accel/accel.sh@20 -- # IFS=: 00:08:28.805 20:35:12 -- accel/accel.sh@20 -- # read -r var val 00:08:28.805 20:35:12 -- accel/accel.sh@21 -- # val=32 00:08:28.805 20:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.805 20:35:12 -- accel/accel.sh@20 -- # IFS=: 00:08:28.805 20:35:12 -- accel/accel.sh@20 -- # read -r var val 00:08:28.805 20:35:12 -- accel/accel.sh@21 -- # val=32 00:08:28.805 20:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.805 20:35:12 -- accel/accel.sh@20 -- # IFS=: 00:08:28.805 20:35:12 -- accel/accel.sh@20 -- # read -r var val 00:08:28.805 20:35:12 -- accel/accel.sh@21 -- # val=1 00:08:28.805 20:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.805 20:35:12 -- accel/accel.sh@20 -- # IFS=: 00:08:28.805 
20:35:12 -- accel/accel.sh@20 -- # read -r var val 00:08:28.805 20:35:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:28.805 20:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.805 20:35:12 -- accel/accel.sh@20 -- # IFS=: 00:08:28.805 20:35:12 -- accel/accel.sh@20 -- # read -r var val 00:08:28.805 20:35:12 -- accel/accel.sh@21 -- # val=Yes 00:08:28.805 20:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.805 20:35:12 -- accel/accel.sh@20 -- # IFS=: 00:08:28.805 20:35:12 -- accel/accel.sh@20 -- # read -r var val 00:08:28.805 20:35:12 -- accel/accel.sh@21 -- # val= 00:08:28.805 20:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.805 20:35:12 -- accel/accel.sh@20 -- # IFS=: 00:08:28.805 20:35:12 -- accel/accel.sh@20 -- # read -r var val 00:08:28.805 20:35:12 -- accel/accel.sh@21 -- # val= 00:08:28.805 20:35:12 -- accel/accel.sh@22 -- # case "$var" in 00:08:28.805 20:35:12 -- accel/accel.sh@20 -- # IFS=: 00:08:28.805 20:35:12 -- accel/accel.sh@20 -- # read -r var val 00:08:30.733 20:35:14 -- accel/accel.sh@21 -- # val= 00:08:30.733 20:35:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.733 20:35:14 -- accel/accel.sh@20 -- # IFS=: 00:08:30.733 20:35:14 -- accel/accel.sh@20 -- # read -r var val 00:08:30.733 20:35:14 -- accel/accel.sh@21 -- # val= 00:08:30.733 20:35:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.733 20:35:14 -- accel/accel.sh@20 -- # IFS=: 00:08:30.733 20:35:14 -- accel/accel.sh@20 -- # read -r var val 00:08:30.733 20:35:14 -- accel/accel.sh@21 -- # val= 00:08:30.733 20:35:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.733 20:35:14 -- accel/accel.sh@20 -- # IFS=: 00:08:30.733 20:35:14 -- accel/accel.sh@20 -- # read -r var val 00:08:30.733 20:35:14 -- accel/accel.sh@21 -- # val= 00:08:30.733 20:35:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.733 20:35:14 -- accel/accel.sh@20 -- # IFS=: 00:08:30.733 20:35:14 -- accel/accel.sh@20 -- # read -r var val 00:08:30.733 20:35:14 -- accel/accel.sh@21 -- # val= 00:08:30.733 20:35:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.734 20:35:14 -- accel/accel.sh@20 -- # IFS=: 00:08:30.734 20:35:14 -- accel/accel.sh@20 -- # read -r var val 00:08:30.734 20:35:14 -- accel/accel.sh@21 -- # val= 00:08:30.734 20:35:14 -- accel/accel.sh@22 -- # case "$var" in 00:08:30.734 20:35:14 -- accel/accel.sh@20 -- # IFS=: 00:08:30.734 20:35:14 -- accel/accel.sh@20 -- # read -r var val 00:08:30.734 ************************************ 00:08:30.734 END TEST accel_dualcast 00:08:30.734 ************************************ 00:08:30.734 20:35:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:30.734 20:35:14 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:08:30.734 20:35:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:30.734 00:08:30.734 real 0m5.189s 00:08:30.734 user 0m4.544s 00:08:30.734 sys 0m0.344s 00:08:30.734 20:35:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.734 20:35:14 -- common/autotest_common.sh@10 -- # set +x 00:08:30.734 20:35:14 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:30.734 20:35:14 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:30.734 20:35:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:30.734 20:35:14 -- common/autotest_common.sh@10 -- # set +x 00:08:30.734 ************************************ 00:08:30.734 START TEST accel_compare 00:08:30.734 ************************************ 00:08:30.734 20:35:14 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:08:30.734 
20:35:14 -- accel/accel.sh@16 -- # local accel_opc 00:08:30.734 20:35:14 -- accel/accel.sh@17 -- # local accel_module 00:08:30.734 20:35:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:08:30.734 20:35:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:30.734 20:35:14 -- accel/accel.sh@12 -- # build_accel_config 00:08:30.734 20:35:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:30.734 20:35:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:30.734 20:35:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:30.734 20:35:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:30.734 20:35:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:30.734 20:35:14 -- accel/accel.sh@41 -- # local IFS=, 00:08:30.734 20:35:14 -- accel/accel.sh@42 -- # jq -r . 00:08:30.993 [2024-04-15 20:35:14.366812] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:08:30.993 [2024-04-15 20:35:14.366969] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42957 ] 00:08:31.252 [2024-04-15 20:35:14.521534] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.252 [2024-04-15 20:35:14.725798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.788 20:35:16 -- accel/accel.sh@18 -- # out=' 00:08:33.788 SPDK Configuration: 00:08:33.788 Core mask: 0x1 00:08:33.788 00:08:33.788 Accel Perf Configuration: 00:08:33.788 Workload Type: compare 00:08:33.788 Transfer size: 4096 bytes 00:08:33.788 Vector count 1 00:08:33.788 Module: software 00:08:33.788 Queue depth: 32 00:08:33.788 Allocate depth: 32 00:08:33.788 # threads/core: 1 00:08:33.788 Run time: 1 seconds 00:08:33.788 Verify: Yes 00:08:33.788 00:08:33.788 Running for 1 seconds... 00:08:33.788 00:08:33.789 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:33.789 ------------------------------------------------------------------------------------ 00:08:33.789 0,0 1552384/s 6064 MiB/s 0 0 00:08:33.789 ==================================================================================== 00:08:33.789 Total 1552384/s 6064 MiB/s 0 0' 00:08:33.789 20:35:16 -- accel/accel.sh@20 -- # IFS=: 00:08:33.789 20:35:16 -- accel/accel.sh@20 -- # read -r var val 00:08:33.789 20:35:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:08:33.789 20:35:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:33.789 20:35:16 -- accel/accel.sh@12 -- # build_accel_config 00:08:33.789 20:35:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:33.789 20:35:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:33.789 20:35:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:33.789 20:35:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:33.789 20:35:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:33.789 20:35:16 -- accel/accel.sh@41 -- # local IFS=, 00:08:33.789 20:35:16 -- accel/accel.sh@42 -- # jq -r . 00:08:33.789 [2024-04-15 20:35:16.990131] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:08:33.789 [2024-04-15 20:35:16.990303] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42993 ] 00:08:33.789 [2024-04-15 20:35:17.145816] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.047 [2024-04-15 20:35:17.349233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.307 20:35:17 -- accel/accel.sh@21 -- # val= 00:08:34.307 20:35:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.307 20:35:17 -- accel/accel.sh@20 -- # IFS=: 00:08:34.307 20:35:17 -- accel/accel.sh@20 -- # read -r var val 00:08:34.307 20:35:17 -- accel/accel.sh@21 -- # val= 00:08:34.307 20:35:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.307 20:35:17 -- accel/accel.sh@20 -- # IFS=: 00:08:34.307 20:35:17 -- accel/accel.sh@20 -- # read -r var val 00:08:34.307 20:35:17 -- accel/accel.sh@21 -- # val=0x1 00:08:34.307 20:35:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.307 20:35:17 -- accel/accel.sh@20 -- # IFS=: 00:08:34.307 20:35:17 -- accel/accel.sh@20 -- # read -r var val 00:08:34.307 20:35:17 -- accel/accel.sh@21 -- # val= 00:08:34.307 20:35:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.307 20:35:17 -- accel/accel.sh@20 -- # IFS=: 00:08:34.307 20:35:17 -- accel/accel.sh@20 -- # read -r var val 00:08:34.307 20:35:17 -- accel/accel.sh@21 -- # val= 00:08:34.307 20:35:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.307 20:35:17 -- accel/accel.sh@20 -- # IFS=: 00:08:34.307 20:35:17 -- accel/accel.sh@20 -- # read -r var val 00:08:34.307 20:35:17 -- accel/accel.sh@21 -- # val=compare 00:08:34.307 20:35:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.307 20:35:17 -- accel/accel.sh@24 -- # accel_opc=compare 00:08:34.307 20:35:17 -- accel/accel.sh@20 -- # IFS=: 00:08:34.307 20:35:17 -- accel/accel.sh@20 -- # read -r var val 00:08:34.307 20:35:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:34.307 20:35:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.307 20:35:17 -- accel/accel.sh@20 -- # IFS=: 00:08:34.307 20:35:17 -- accel/accel.sh@20 -- # read -r var val 00:08:34.307 20:35:17 -- accel/accel.sh@21 -- # val= 00:08:34.307 20:35:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.307 20:35:17 -- accel/accel.sh@20 -- # IFS=: 00:08:34.307 20:35:17 -- accel/accel.sh@20 -- # read -r var val 00:08:34.307 20:35:17 -- accel/accel.sh@21 -- # val=software 00:08:34.307 20:35:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.307 20:35:17 -- accel/accel.sh@23 -- # accel_module=software 00:08:34.307 20:35:17 -- accel/accel.sh@20 -- # IFS=: 00:08:34.307 20:35:17 -- accel/accel.sh@20 -- # read -r var val 00:08:34.307 20:35:17 -- accel/accel.sh@21 -- # val=32 00:08:34.307 20:35:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.307 20:35:17 -- accel/accel.sh@20 -- # IFS=: 00:08:34.307 20:35:17 -- accel/accel.sh@20 -- # read -r var val 00:08:34.307 20:35:17 -- accel/accel.sh@21 -- # val=32 00:08:34.307 20:35:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.307 20:35:17 -- accel/accel.sh@20 -- # IFS=: 00:08:34.307 20:35:17 -- accel/accel.sh@20 -- # read -r var val 00:08:34.307 20:35:17 -- accel/accel.sh@21 -- # val=1 00:08:34.307 20:35:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.307 20:35:17 -- accel/accel.sh@20 -- # IFS=: 00:08:34.307 20:35:17 -- accel/accel.sh@20 -- # read -r var val 00:08:34.307 20:35:17 -- accel/accel.sh@21 -- # val='1 seconds' 
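The val=/case/IFS=:/read lines that dominate this trace are accel.sh stepping through accel_perf's summary one "key: value" line at a time, recording the opcode and module under test so the [[ -n compare ]] / [[ -n software ]] assertions at the end of the test can check them. A minimal sketch of such a loop, assuming the captured program output is in $out — the match patterns here are illustrative, not accel.sh's exact ones:

    parse_accel_summary() {
      local var val
      while IFS=: read -r var val; do          # split each line on the first ':'
        case "$var" in
          *'Workload Type'*) accel_opc=${val# } ;;     # e.g. "compare"
          *'Module'*)        accel_module=${val# } ;;  # e.g. "software"
        esac
      done <<< "$out"
    }

${val# } strips the single space left after the colon; the traced accel_opc=compare and accel_module=software assignments show the real script capturing exactly these two fields.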
00:08:34.307 20:35:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.307 20:35:17 -- accel/accel.sh@20 -- # IFS=: 00:08:34.307 20:35:17 -- accel/accel.sh@20 -- # read -r var val 00:08:34.307 20:35:17 -- accel/accel.sh@21 -- # val=Yes 00:08:34.307 20:35:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.307 20:35:17 -- accel/accel.sh@20 -- # IFS=: 00:08:34.307 20:35:17 -- accel/accel.sh@20 -- # read -r var val 00:08:34.307 20:35:17 -- accel/accel.sh@21 -- # val= 00:08:34.307 20:35:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.307 20:35:17 -- accel/accel.sh@20 -- # IFS=: 00:08:34.307 20:35:17 -- accel/accel.sh@20 -- # read -r var val 00:08:34.307 20:35:17 -- accel/accel.sh@21 -- # val= 00:08:34.307 20:35:17 -- accel/accel.sh@22 -- # case "$var" in 00:08:34.307 20:35:17 -- accel/accel.sh@20 -- # IFS=: 00:08:34.307 20:35:17 -- accel/accel.sh@20 -- # read -r var val 00:08:36.210 20:35:19 -- accel/accel.sh@21 -- # val= 00:08:36.210 20:35:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.210 20:35:19 -- accel/accel.sh@20 -- # IFS=: 00:08:36.210 20:35:19 -- accel/accel.sh@20 -- # read -r var val 00:08:36.210 20:35:19 -- accel/accel.sh@21 -- # val= 00:08:36.210 20:35:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.210 20:35:19 -- accel/accel.sh@20 -- # IFS=: 00:08:36.210 20:35:19 -- accel/accel.sh@20 -- # read -r var val 00:08:36.210 20:35:19 -- accel/accel.sh@21 -- # val= 00:08:36.210 20:35:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.210 20:35:19 -- accel/accel.sh@20 -- # IFS=: 00:08:36.210 20:35:19 -- accel/accel.sh@20 -- # read -r var val 00:08:36.210 20:35:19 -- accel/accel.sh@21 -- # val= 00:08:36.210 20:35:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.210 20:35:19 -- accel/accel.sh@20 -- # IFS=: 00:08:36.210 20:35:19 -- accel/accel.sh@20 -- # read -r var val 00:08:36.210 20:35:19 -- accel/accel.sh@21 -- # val= 00:08:36.210 20:35:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.210 20:35:19 -- accel/accel.sh@20 -- # IFS=: 00:08:36.210 20:35:19 -- accel/accel.sh@20 -- # read -r var val 00:08:36.210 20:35:19 -- accel/accel.sh@21 -- # val= 00:08:36.210 20:35:19 -- accel/accel.sh@22 -- # case "$var" in 00:08:36.210 20:35:19 -- accel/accel.sh@20 -- # IFS=: 00:08:36.210 20:35:19 -- accel/accel.sh@20 -- # read -r var val 00:08:36.210 ************************************ 00:08:36.210 END TEST accel_compare 00:08:36.210 ************************************ 00:08:36.210 20:35:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:36.210 20:35:19 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:08:36.210 20:35:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:36.210 00:08:36.210 real 0m5.230s 00:08:36.210 user 0m4.586s 00:08:36.210 sys 0m0.343s 00:08:36.210 20:35:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:36.210 20:35:19 -- common/autotest_common.sh@10 -- # set +x 00:08:36.210 20:35:19 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:36.210 20:35:19 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:36.210 20:35:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:36.210 20:35:19 -- common/autotest_common.sh@10 -- # set +x 00:08:36.210 ************************************ 00:08:36.210 START TEST accel_xor 00:08:36.210 ************************************ 00:08:36.210 20:35:19 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:08:36.210 20:35:19 -- accel/accel.sh@16 -- # local accel_opc 00:08:36.210 20:35:19 -- accel/accel.sh@17 -- # local accel_module 00:08:36.210 
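The xor workload that starts here XORs two 4096-byte source buffers into a destination ("Source buffers: 2" in the summary that follows), so the software path does far more per-byte work than the memcmp-style compare above. The reported bandwidth counts destination bytes only, which matches the totals in the run below:

    42400 transfers/s x 4096 bytes/transfer = 173,670,400 B/s ≈ 165 MiB/s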
20:35:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:08:36.210 20:35:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:36.210 20:35:19 -- accel/accel.sh@12 -- # build_accel_config 00:08:36.210 20:35:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:36.210 20:35:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:36.210 20:35:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:36.210 20:35:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:36.210 20:35:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:36.210 20:35:19 -- accel/accel.sh@41 -- # local IFS=, 00:08:36.210 20:35:19 -- accel/accel.sh@42 -- # jq -r . 00:08:36.210 [2024-04-15 20:35:19.663069] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:08:36.210 [2024-04-15 20:35:19.663231] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43054 ] 00:08:36.469 [2024-04-15 20:35:19.855701] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.728 [2024-04-15 20:35:20.066692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.263 20:35:22 -- accel/accel.sh@18 -- # out=' 00:08:39.263 SPDK Configuration: 00:08:39.263 Core mask: 0x1 00:08:39.263 00:08:39.263 Accel Perf Configuration: 00:08:39.263 Workload Type: xor 00:08:39.263 Source buffers: 2 00:08:39.263 Transfer size: 4096 bytes 00:08:39.263 Vector count 1 00:08:39.263 Module: software 00:08:39.263 Queue depth: 32 00:08:39.263 Allocate depth: 32 00:08:39.263 # threads/core: 1 00:08:39.263 Run time: 1 seconds 00:08:39.263 Verify: Yes 00:08:39.263 00:08:39.263 Running for 1 seconds... 00:08:39.263 00:08:39.263 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:39.263 ------------------------------------------------------------------------------------ 00:08:39.263 0,0 42400/s 165 MiB/s 0 0 00:08:39.263 ==================================================================================== 00:08:39.263 Total 42400/s 165 MiB/s 0 0' 00:08:39.263 20:35:22 -- accel/accel.sh@20 -- # IFS=: 00:08:39.263 20:35:22 -- accel/accel.sh@20 -- # read -r var val 00:08:39.263 20:35:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:39.263 20:35:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:39.263 20:35:22 -- accel/accel.sh@12 -- # build_accel_config 00:08:39.263 20:35:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:39.263 20:35:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:39.263 20:35:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:39.263 20:35:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:39.263 20:35:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:39.263 20:35:22 -- accel/accel.sh@41 -- # local IFS=, 00:08:39.263 20:35:22 -- accel/accel.sh@42 -- # jq -r . 00:08:39.263 [2024-04-15 20:35:22.319166] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:08:39.263 [2024-04-15 20:35:22.319325] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43095 ] 00:08:39.263 [2024-04-15 20:35:22.468297] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.263 [2024-04-15 20:35:22.679950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.522 20:35:22 -- accel/accel.sh@21 -- # val= 00:08:39.522 20:35:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.522 20:35:22 -- accel/accel.sh@20 -- # IFS=: 00:08:39.522 20:35:22 -- accel/accel.sh@20 -- # read -r var val 00:08:39.522 20:35:22 -- accel/accel.sh@21 -- # val= 00:08:39.522 20:35:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.522 20:35:22 -- accel/accel.sh@20 -- # IFS=: 00:08:39.522 20:35:22 -- accel/accel.sh@20 -- # read -r var val 00:08:39.522 20:35:22 -- accel/accel.sh@21 -- # val=0x1 00:08:39.522 20:35:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.522 20:35:22 -- accel/accel.sh@20 -- # IFS=: 00:08:39.522 20:35:22 -- accel/accel.sh@20 -- # read -r var val 00:08:39.522 20:35:22 -- accel/accel.sh@21 -- # val= 00:08:39.522 20:35:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.522 20:35:22 -- accel/accel.sh@20 -- # IFS=: 00:08:39.522 20:35:22 -- accel/accel.sh@20 -- # read -r var val 00:08:39.522 20:35:22 -- accel/accel.sh@21 -- # val= 00:08:39.522 20:35:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.522 20:35:22 -- accel/accel.sh@20 -- # IFS=: 00:08:39.522 20:35:22 -- accel/accel.sh@20 -- # read -r var val 00:08:39.522 20:35:22 -- accel/accel.sh@21 -- # val=xor 00:08:39.522 20:35:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.522 20:35:22 -- accel/accel.sh@24 -- # accel_opc=xor 00:08:39.522 20:35:22 -- accel/accel.sh@20 -- # IFS=: 00:08:39.522 20:35:22 -- accel/accel.sh@20 -- # read -r var val 00:08:39.522 20:35:22 -- accel/accel.sh@21 -- # val=2 00:08:39.522 20:35:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.522 20:35:22 -- accel/accel.sh@20 -- # IFS=: 00:08:39.522 20:35:22 -- accel/accel.sh@20 -- # read -r var val 00:08:39.522 20:35:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:39.522 20:35:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.522 20:35:22 -- accel/accel.sh@20 -- # IFS=: 00:08:39.522 20:35:22 -- accel/accel.sh@20 -- # read -r var val 00:08:39.522 20:35:22 -- accel/accel.sh@21 -- # val= 00:08:39.522 20:35:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.522 20:35:22 -- accel/accel.sh@20 -- # IFS=: 00:08:39.522 20:35:22 -- accel/accel.sh@20 -- # read -r var val 00:08:39.522 20:35:22 -- accel/accel.sh@21 -- # val=software 00:08:39.522 20:35:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.522 20:35:22 -- accel/accel.sh@23 -- # accel_module=software 00:08:39.522 20:35:22 -- accel/accel.sh@20 -- # IFS=: 00:08:39.522 20:35:22 -- accel/accel.sh@20 -- # read -r var val 00:08:39.522 20:35:22 -- accel/accel.sh@21 -- # val=32 00:08:39.522 20:35:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.522 20:35:22 -- accel/accel.sh@20 -- # IFS=: 00:08:39.522 20:35:22 -- accel/accel.sh@20 -- # read -r var val 00:08:39.522 20:35:22 -- accel/accel.sh@21 -- # val=32 00:08:39.522 20:35:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.522 20:35:22 -- accel/accel.sh@20 -- # IFS=: 00:08:39.522 20:35:22 -- accel/accel.sh@20 -- # read -r var val 00:08:39.522 20:35:22 -- accel/accel.sh@21 -- # val=1 00:08:39.522 20:35:22 -- 
accel/accel.sh@22 -- # case "$var" in 00:08:39.522 20:35:22 -- accel/accel.sh@20 -- # IFS=: 00:08:39.522 20:35:22 -- accel/accel.sh@20 -- # read -r var val 00:08:39.522 20:35:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:39.522 20:35:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.522 20:35:22 -- accel/accel.sh@20 -- # IFS=: 00:08:39.522 20:35:22 -- accel/accel.sh@20 -- # read -r var val 00:08:39.522 20:35:22 -- accel/accel.sh@21 -- # val=Yes 00:08:39.522 20:35:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.522 20:35:22 -- accel/accel.sh@20 -- # IFS=: 00:08:39.522 20:35:22 -- accel/accel.sh@20 -- # read -r var val 00:08:39.522 20:35:22 -- accel/accel.sh@21 -- # val= 00:08:39.523 20:35:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.523 20:35:22 -- accel/accel.sh@20 -- # IFS=: 00:08:39.523 20:35:22 -- accel/accel.sh@20 -- # read -r var val 00:08:39.523 20:35:22 -- accel/accel.sh@21 -- # val= 00:08:39.523 20:35:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:39.523 20:35:22 -- accel/accel.sh@20 -- # IFS=: 00:08:39.523 20:35:22 -- accel/accel.sh@20 -- # read -r var val 00:08:41.430 20:35:24 -- accel/accel.sh@21 -- # val= 00:08:41.430 20:35:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.430 20:35:24 -- accel/accel.sh@20 -- # IFS=: 00:08:41.430 20:35:24 -- accel/accel.sh@20 -- # read -r var val 00:08:41.430 20:35:24 -- accel/accel.sh@21 -- # val= 00:08:41.430 20:35:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.430 20:35:24 -- accel/accel.sh@20 -- # IFS=: 00:08:41.430 20:35:24 -- accel/accel.sh@20 -- # read -r var val 00:08:41.431 20:35:24 -- accel/accel.sh@21 -- # val= 00:08:41.431 20:35:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.431 20:35:24 -- accel/accel.sh@20 -- # IFS=: 00:08:41.431 20:35:24 -- accel/accel.sh@20 -- # read -r var val 00:08:41.431 20:35:24 -- accel/accel.sh@21 -- # val= 00:08:41.431 20:35:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.431 20:35:24 -- accel/accel.sh@20 -- # IFS=: 00:08:41.431 20:35:24 -- accel/accel.sh@20 -- # read -r var val 00:08:41.431 20:35:24 -- accel/accel.sh@21 -- # val= 00:08:41.431 20:35:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.431 20:35:24 -- accel/accel.sh@20 -- # IFS=: 00:08:41.431 20:35:24 -- accel/accel.sh@20 -- # read -r var val 00:08:41.431 20:35:24 -- accel/accel.sh@21 -- # val= 00:08:41.431 20:35:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:41.431 20:35:24 -- accel/accel.sh@20 -- # IFS=: 00:08:41.431 20:35:24 -- accel/accel.sh@20 -- # read -r var val 00:08:41.431 20:35:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:41.431 20:35:24 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:08:41.431 20:35:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:41.431 00:08:41.431 real 0m5.273s 00:08:41.431 user 0m4.613s 00:08:41.431 sys 0m0.366s 00:08:41.431 ************************************ 00:08:41.431 END TEST accel_xor 00:08:41.431 ************************************ 00:08:41.431 20:35:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:41.431 20:35:24 -- common/autotest_common.sh@10 -- # set +x 00:08:41.431 20:35:24 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:41.431 20:35:24 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:08:41.431 20:35:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:41.431 20:35:24 -- common/autotest_common.sh@10 -- # set +x 00:08:41.431 ************************************ 00:08:41.431 START TEST accel_xor 00:08:41.431 ************************************ 00:08:41.431 
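This second accel_xor section re-runs the workload with three source buffers instead of two; stripped of the config plumbing, the traced command is equivalent to:

    ./build/examples/accel_perf -t 1 -w xor -y -x 3

With a third 4 KiB source to read per operation, throughput drops from 42400/s to 32000/s in the run below (32000 x 4096 B ≈ 125 MiB/s, matching the reported total).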
20:35:24 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:08:41.431 20:35:24 -- accel/accel.sh@16 -- # local accel_opc 00:08:41.431 20:35:24 -- accel/accel.sh@17 -- # local accel_module 00:08:41.431 20:35:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:08:41.431 20:35:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:41.431 20:35:24 -- accel/accel.sh@12 -- # build_accel_config 00:08:41.431 20:35:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:41.431 20:35:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:41.431 20:35:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:41.431 20:35:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:41.431 20:35:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:41.431 20:35:24 -- accel/accel.sh@41 -- # local IFS=, 00:08:41.431 20:35:24 -- accel/accel.sh@42 -- # jq -r . 00:08:41.690 [2024-04-15 20:35:25.003034] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:08:41.690 [2024-04-15 20:35:25.003216] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43147 ] 00:08:41.690 [2024-04-15 20:35:25.158274] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.949 [2024-04-15 20:35:25.362100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.484 20:35:27 -- accel/accel.sh@18 -- # out=' 00:08:44.484 SPDK Configuration: 00:08:44.484 Core mask: 0x1 00:08:44.484 00:08:44.484 Accel Perf Configuration: 00:08:44.484 Workload Type: xor 00:08:44.484 Source buffers: 3 00:08:44.484 Transfer size: 4096 bytes 00:08:44.484 Vector count 1 00:08:44.484 Module: software 00:08:44.484 Queue depth: 32 00:08:44.484 Allocate depth: 32 00:08:44.484 # threads/core: 1 00:08:44.484 Run time: 1 seconds 00:08:44.484 Verify: Yes 00:08:44.484 00:08:44.484 Running for 1 seconds... 00:08:44.484 00:08:44.484 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:44.484 ------------------------------------------------------------------------------------ 00:08:44.484 0,0 32000/s 125 MiB/s 0 0 00:08:44.484 ==================================================================================== 00:08:44.484 Total 32000/s 125 MiB/s 0 0' 00:08:44.484 20:35:27 -- accel/accel.sh@20 -- # IFS=: 00:08:44.484 20:35:27 -- accel/accel.sh@20 -- # read -r var val 00:08:44.484 20:35:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:44.484 20:35:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:44.484 20:35:27 -- accel/accel.sh@12 -- # build_accel_config 00:08:44.484 20:35:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:44.484 20:35:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:44.484 20:35:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:44.484 20:35:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:44.484 20:35:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:44.484 20:35:27 -- accel/accel.sh@41 -- # local IFS=, 00:08:44.485 20:35:27 -- accel/accel.sh@42 -- # jq -r . 00:08:44.485 [2024-04-15 20:35:27.614770] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:08:44.485 [2024-04-15 20:35:27.614928] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43190 ] 00:08:44.485 [2024-04-15 20:35:27.783010] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.743 [2024-04-15 20:35:27.988558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.743 20:35:28 -- accel/accel.sh@21 -- # val= 00:08:44.743 20:35:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.743 20:35:28 -- accel/accel.sh@20 -- # IFS=: 00:08:44.743 20:35:28 -- accel/accel.sh@20 -- # read -r var val 00:08:44.743 20:35:28 -- accel/accel.sh@21 -- # val= 00:08:44.743 20:35:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.743 20:35:28 -- accel/accel.sh@20 -- # IFS=: 00:08:44.743 20:35:28 -- accel/accel.sh@20 -- # read -r var val 00:08:44.743 20:35:28 -- accel/accel.sh@21 -- # val=0x1 00:08:44.743 20:35:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.743 20:35:28 -- accel/accel.sh@20 -- # IFS=: 00:08:44.743 20:35:28 -- accel/accel.sh@20 -- # read -r var val 00:08:44.743 20:35:28 -- accel/accel.sh@21 -- # val= 00:08:44.743 20:35:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.743 20:35:28 -- accel/accel.sh@20 -- # IFS=: 00:08:44.743 20:35:28 -- accel/accel.sh@20 -- # read -r var val 00:08:44.743 20:35:28 -- accel/accel.sh@21 -- # val= 00:08:44.743 20:35:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.743 20:35:28 -- accel/accel.sh@20 -- # IFS=: 00:08:44.743 20:35:28 -- accel/accel.sh@20 -- # read -r var val 00:08:44.743 20:35:28 -- accel/accel.sh@21 -- # val=xor 00:08:44.743 20:35:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.743 20:35:28 -- accel/accel.sh@24 -- # accel_opc=xor 00:08:44.743 20:35:28 -- accel/accel.sh@20 -- # IFS=: 00:08:44.743 20:35:28 -- accel/accel.sh@20 -- # read -r var val 00:08:44.743 20:35:28 -- accel/accel.sh@21 -- # val=3 00:08:44.743 20:35:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.743 20:35:28 -- accel/accel.sh@20 -- # IFS=: 00:08:44.743 20:35:28 -- accel/accel.sh@20 -- # read -r var val 00:08:44.743 20:35:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:44.743 20:35:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.743 20:35:28 -- accel/accel.sh@20 -- # IFS=: 00:08:44.743 20:35:28 -- accel/accel.sh@20 -- # read -r var val 00:08:44.743 20:35:28 -- accel/accel.sh@21 -- # val= 00:08:44.743 20:35:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.743 20:35:28 -- accel/accel.sh@20 -- # IFS=: 00:08:44.743 20:35:28 -- accel/accel.sh@20 -- # read -r var val 00:08:44.743 20:35:28 -- accel/accel.sh@21 -- # val=software 00:08:44.743 20:35:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.743 20:35:28 -- accel/accel.sh@23 -- # accel_module=software 00:08:44.743 20:35:28 -- accel/accel.sh@20 -- # IFS=: 00:08:44.743 20:35:28 -- accel/accel.sh@20 -- # read -r var val 00:08:44.743 20:35:28 -- accel/accel.sh@21 -- # val=32 00:08:44.743 20:35:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.743 20:35:28 -- accel/accel.sh@20 -- # IFS=: 00:08:44.743 20:35:28 -- accel/accel.sh@20 -- # read -r var val 00:08:44.743 20:35:28 -- accel/accel.sh@21 -- # val=32 00:08:44.743 20:35:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.743 20:35:28 -- accel/accel.sh@20 -- # IFS=: 00:08:44.743 20:35:28 -- accel/accel.sh@20 -- # read -r var val 00:08:44.743 20:35:28 -- accel/accel.sh@21 -- # val=1 00:08:44.743 20:35:28 -- 
accel/accel.sh@22 -- # case "$var" in 00:08:44.743 20:35:28 -- accel/accel.sh@20 -- # IFS=: 00:08:44.743 20:35:28 -- accel/accel.sh@20 -- # read -r var val 00:08:44.743 20:35:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:44.743 20:35:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.743 20:35:28 -- accel/accel.sh@20 -- # IFS=: 00:08:44.743 20:35:28 -- accel/accel.sh@20 -- # read -r var val 00:08:44.743 20:35:28 -- accel/accel.sh@21 -- # val=Yes 00:08:44.743 20:35:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.743 20:35:28 -- accel/accel.sh@20 -- # IFS=: 00:08:44.743 20:35:28 -- accel/accel.sh@20 -- # read -r var val 00:08:44.743 20:35:28 -- accel/accel.sh@21 -- # val= 00:08:44.743 20:35:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.743 20:35:28 -- accel/accel.sh@20 -- # IFS=: 00:08:44.743 20:35:28 -- accel/accel.sh@20 -- # read -r var val 00:08:44.743 20:35:28 -- accel/accel.sh@21 -- # val= 00:08:44.743 20:35:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:44.743 20:35:28 -- accel/accel.sh@20 -- # IFS=: 00:08:44.743 20:35:28 -- accel/accel.sh@20 -- # read -r var val 00:08:46.647 20:35:30 -- accel/accel.sh@21 -- # val= 00:08:46.647 20:35:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:46.647 20:35:30 -- accel/accel.sh@20 -- # IFS=: 00:08:46.647 20:35:30 -- accel/accel.sh@20 -- # read -r var val 00:08:46.647 20:35:30 -- accel/accel.sh@21 -- # val= 00:08:46.647 20:35:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:46.647 20:35:30 -- accel/accel.sh@20 -- # IFS=: 00:08:46.647 20:35:30 -- accel/accel.sh@20 -- # read -r var val 00:08:46.647 20:35:30 -- accel/accel.sh@21 -- # val= 00:08:46.647 20:35:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:46.647 20:35:30 -- accel/accel.sh@20 -- # IFS=: 00:08:46.647 20:35:30 -- accel/accel.sh@20 -- # read -r var val 00:08:46.647 20:35:30 -- accel/accel.sh@21 -- # val= 00:08:46.647 20:35:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:46.647 20:35:30 -- accel/accel.sh@20 -- # IFS=: 00:08:46.647 20:35:30 -- accel/accel.sh@20 -- # read -r var val 00:08:46.647 20:35:30 -- accel/accel.sh@21 -- # val= 00:08:46.647 20:35:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:46.647 20:35:30 -- accel/accel.sh@20 -- # IFS=: 00:08:46.647 20:35:30 -- accel/accel.sh@20 -- # read -r var val 00:08:46.647 20:35:30 -- accel/accel.sh@21 -- # val= 00:08:46.647 20:35:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:46.647 20:35:30 -- accel/accel.sh@20 -- # IFS=: 00:08:46.647 20:35:30 -- accel/accel.sh@20 -- # read -r var val 00:08:46.647 20:35:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:46.647 20:35:30 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:08:46.647 20:35:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:46.647 00:08:46.647 real 0m5.228s 00:08:46.647 user 0m4.583s 00:08:46.647 sys 0m0.343s 00:08:46.647 20:35:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:46.647 20:35:30 -- common/autotest_common.sh@10 -- # set +x 00:08:46.647 ************************************ 00:08:46.647 END TEST accel_xor 00:08:46.647 ************************************ 00:08:46.647 20:35:30 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:46.647 20:35:30 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:08:46.647 20:35:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:46.647 20:35:30 -- common/autotest_common.sh@10 -- # set +x 00:08:46.907 ************************************ 00:08:46.907 START TEST accel_dif_verify 00:08:46.907 ************************************ 
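dif_verify exercises the accel framework's T10 DIF check: with the "Block size: 512" and "Metadata size: 8" settings shown in the summary below, each 512-byte block carries an 8-byte protection-information tuple (a 2-byte guard CRC, a 2-byte application tag, and a 4-byte reference tag) that the operation validates. Note that "Verify: No" in this summary refers to accel_perf's optional output comparison (-y, not passed for this test); the DIF check itself is the workload. The traced invocation reduces to:

    ./build/examples/accel_perf -t 1 -w dif_verify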
00:08:46.907 20:35:30 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:08:46.907 20:35:30 -- accel/accel.sh@16 -- # local accel_opc 00:08:46.907 20:35:30 -- accel/accel.sh@17 -- # local accel_module 00:08:46.907 20:35:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:08:46.907 20:35:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:46.907 20:35:30 -- accel/accel.sh@12 -- # build_accel_config 00:08:46.907 20:35:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:46.907 20:35:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:46.907 20:35:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:46.907 20:35:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:46.907 20:35:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:46.907 20:35:30 -- accel/accel.sh@41 -- # local IFS=, 00:08:46.907 20:35:30 -- accel/accel.sh@42 -- # jq -r . 00:08:46.907 [2024-04-15 20:35:30.291222] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:08:46.907 [2024-04-15 20:35:30.291375] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43237 ] 00:08:47.166 [2024-04-15 20:35:30.446181] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.166 [2024-04-15 20:35:30.657543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.700 20:35:32 -- accel/accel.sh@18 -- # out=' 00:08:49.700 SPDK Configuration: 00:08:49.700 Core mask: 0x1 00:08:49.700 00:08:49.700 Accel Perf Configuration: 00:08:49.700 Workload Type: dif_verify 00:08:49.700 Vector size: 4096 bytes 00:08:49.700 Transfer size: 4096 bytes 00:08:49.700 Block size: 512 bytes 00:08:49.700 Metadata size: 8 bytes 00:08:49.700 Vector count 1 00:08:49.700 Module: software 00:08:49.700 Queue depth: 32 00:08:49.700 Allocate depth: 32 00:08:49.700 # threads/core: 1 00:08:49.700 Run time: 1 seconds 00:08:49.700 Verify: No 00:08:49.700 00:08:49.700 Running for 1 seconds... 00:08:49.700 00:08:49.700 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:49.700 ------------------------------------------------------------------------------------ 00:08:49.700 0,0 54592/s 216 MiB/s 0 0 00:08:49.700 ==================================================================================== 00:08:49.700 Total 54592/s 213 MiB/s 0 0' 00:08:49.700 20:35:32 -- accel/accel.sh@20 -- # IFS=: 00:08:49.700 20:35:32 -- accel/accel.sh@20 -- # read -r var val 00:08:49.700 20:35:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:49.700 20:35:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:49.700 20:35:32 -- accel/accel.sh@12 -- # build_accel_config 00:08:49.700 20:35:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:49.700 20:35:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:49.700 20:35:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:49.700 20:35:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:49.700 20:35:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:49.700 20:35:32 -- accel/accel.sh@41 -- # local IFS=, 00:08:49.700 20:35:32 -- accel/accel.sh@42 -- # jq -r . 00:08:49.700 [2024-04-15 20:35:32.879705] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:08:49.701 [2024-04-15 20:35:32.879866] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43283 ] 00:08:49.701 [2024-04-15 20:35:33.026096] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.961 [2024-04-15 20:35:33.228051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.961 20:35:33 -- accel/accel.sh@21 -- # val= 00:08:49.961 20:35:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:49.961 20:35:33 -- accel/accel.sh@20 -- # IFS=: 00:08:49.961 20:35:33 -- accel/accel.sh@20 -- # read -r var val 00:08:49.961 20:35:33 -- accel/accel.sh@21 -- # val= 00:08:49.961 20:35:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:49.961 20:35:33 -- accel/accel.sh@20 -- # IFS=: 00:08:49.961 20:35:33 -- accel/accel.sh@20 -- # read -r var val 00:08:49.961 20:35:33 -- accel/accel.sh@21 -- # val=0x1 00:08:49.961 20:35:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:49.961 20:35:33 -- accel/accel.sh@20 -- # IFS=: 00:08:49.961 20:35:33 -- accel/accel.sh@20 -- # read -r var val 00:08:49.961 20:35:33 -- accel/accel.sh@21 -- # val= 00:08:49.961 20:35:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:49.961 20:35:33 -- accel/accel.sh@20 -- # IFS=: 00:08:49.961 20:35:33 -- accel/accel.sh@20 -- # read -r var val 00:08:49.961 20:35:33 -- accel/accel.sh@21 -- # val= 00:08:49.961 20:35:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:49.961 20:35:33 -- accel/accel.sh@20 -- # IFS=: 00:08:49.961 20:35:33 -- accel/accel.sh@20 -- # read -r var val 00:08:49.961 20:35:33 -- accel/accel.sh@21 -- # val=dif_verify 00:08:49.961 20:35:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:49.961 20:35:33 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:08:49.961 20:35:33 -- accel/accel.sh@20 -- # IFS=: 00:08:49.961 20:35:33 -- accel/accel.sh@20 -- # read -r var val 00:08:49.961 20:35:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:49.961 20:35:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:49.961 20:35:33 -- accel/accel.sh@20 -- # IFS=: 00:08:49.961 20:35:33 -- accel/accel.sh@20 -- # read -r var val 00:08:49.961 20:35:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:49.961 20:35:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:49.961 20:35:33 -- accel/accel.sh@20 -- # IFS=: 00:08:49.961 20:35:33 -- accel/accel.sh@20 -- # read -r var val 00:08:49.961 20:35:33 -- accel/accel.sh@21 -- # val='512 bytes' 00:08:49.961 20:35:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:49.961 20:35:33 -- accel/accel.sh@20 -- # IFS=: 00:08:49.961 20:35:33 -- accel/accel.sh@20 -- # read -r var val 00:08:49.961 20:35:33 -- accel/accel.sh@21 -- # val='8 bytes' 00:08:49.961 20:35:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:49.961 20:35:33 -- accel/accel.sh@20 -- # IFS=: 00:08:49.961 20:35:33 -- accel/accel.sh@20 -- # read -r var val 00:08:49.961 20:35:33 -- accel/accel.sh@21 -- # val= 00:08:49.961 20:35:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:49.961 20:35:33 -- accel/accel.sh@20 -- # IFS=: 00:08:49.961 20:35:33 -- accel/accel.sh@20 -- # read -r var val 00:08:49.961 20:35:33 -- accel/accel.sh@21 -- # val=software 00:08:49.961 20:35:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:49.961 20:35:33 -- accel/accel.sh@23 -- # accel_module=software 00:08:49.961 20:35:33 -- accel/accel.sh@20 -- # IFS=: 00:08:49.961 20:35:33 -- accel/accel.sh@20 -- # read -r var val 00:08:49.961 20:35:33 -- accel/accel.sh@21 
-- # val=32 00:08:49.961 20:35:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:49.961 20:35:33 -- accel/accel.sh@20 -- # IFS=: 00:08:49.961 20:35:33 -- accel/accel.sh@20 -- # read -r var val 00:08:49.961 20:35:33 -- accel/accel.sh@21 -- # val=32 00:08:49.961 20:35:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:49.961 20:35:33 -- accel/accel.sh@20 -- # IFS=: 00:08:49.961 20:35:33 -- accel/accel.sh@20 -- # read -r var val 00:08:49.961 20:35:33 -- accel/accel.sh@21 -- # val=1 00:08:49.961 20:35:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:49.961 20:35:33 -- accel/accel.sh@20 -- # IFS=: 00:08:49.961 20:35:33 -- accel/accel.sh@20 -- # read -r var val 00:08:49.961 20:35:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:49.961 20:35:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:49.961 20:35:33 -- accel/accel.sh@20 -- # IFS=: 00:08:49.961 20:35:33 -- accel/accel.sh@20 -- # read -r var val 00:08:49.961 20:35:33 -- accel/accel.sh@21 -- # val=No 00:08:49.961 20:35:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:49.961 20:35:33 -- accel/accel.sh@20 -- # IFS=: 00:08:49.961 20:35:33 -- accel/accel.sh@20 -- # read -r var val 00:08:49.961 20:35:33 -- accel/accel.sh@21 -- # val= 00:08:49.961 20:35:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:49.961 20:35:33 -- accel/accel.sh@20 -- # IFS=: 00:08:49.961 20:35:33 -- accel/accel.sh@20 -- # read -r var val 00:08:49.961 20:35:33 -- accel/accel.sh@21 -- # val= 00:08:49.961 20:35:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:49.961 20:35:33 -- accel/accel.sh@20 -- # IFS=: 00:08:49.961 20:35:33 -- accel/accel.sh@20 -- # read -r var val 00:08:51.869 20:35:35 -- accel/accel.sh@21 -- # val= 00:08:51.869 20:35:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:51.869 20:35:35 -- accel/accel.sh@20 -- # IFS=: 00:08:51.869 20:35:35 -- accel/accel.sh@20 -- # read -r var val 00:08:51.869 20:35:35 -- accel/accel.sh@21 -- # val= 00:08:51.869 20:35:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:51.869 20:35:35 -- accel/accel.sh@20 -- # IFS=: 00:08:51.869 20:35:35 -- accel/accel.sh@20 -- # read -r var val 00:08:51.869 20:35:35 -- accel/accel.sh@21 -- # val= 00:08:51.869 20:35:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:51.869 20:35:35 -- accel/accel.sh@20 -- # IFS=: 00:08:51.869 20:35:35 -- accel/accel.sh@20 -- # read -r var val 00:08:51.869 20:35:35 -- accel/accel.sh@21 -- # val= 00:08:51.869 20:35:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:51.869 20:35:35 -- accel/accel.sh@20 -- # IFS=: 00:08:51.869 20:35:35 -- accel/accel.sh@20 -- # read -r var val 00:08:51.869 20:35:35 -- accel/accel.sh@21 -- # val= 00:08:51.869 20:35:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:51.869 20:35:35 -- accel/accel.sh@20 -- # IFS=: 00:08:51.869 20:35:35 -- accel/accel.sh@20 -- # read -r var val 00:08:51.869 20:35:35 -- accel/accel.sh@21 -- # val= 00:08:51.869 20:35:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:51.869 20:35:35 -- accel/accel.sh@20 -- # IFS=: 00:08:51.869 20:35:35 -- accel/accel.sh@20 -- # read -r var val 00:08:51.869 ************************************ 00:08:51.869 END TEST accel_dif_verify 00:08:51.869 ************************************ 00:08:51.869 20:35:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:51.869 20:35:35 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:08:51.869 20:35:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:51.869 00:08:51.869 real 0m5.181s 00:08:51.869 user 0m4.571s 00:08:51.869 sys 0m0.312s 00:08:51.869 20:35:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:51.869 
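The dif_verify totals above are consistent with destination-byte accounting: 54592 transfers/s x 4096 B ≈ 213 MiB/s, the figure on the Total line. The accel_dif_generate test that follows runs the same block/metadata geometry but produces the 8-byte protection tuples instead of checking them.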
20:35:35 -- common/autotest_common.sh@10 -- # set +x 00:08:52.128 20:35:35 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:52.128 20:35:35 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:08:52.128 20:35:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:52.128 20:35:35 -- common/autotest_common.sh@10 -- # set +x 00:08:52.128 ************************************ 00:08:52.128 START TEST accel_dif_generate 00:08:52.128 ************************************ 00:08:52.128 20:35:35 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:08:52.128 20:35:35 -- accel/accel.sh@16 -- # local accel_opc 00:08:52.128 20:35:35 -- accel/accel.sh@17 -- # local accel_module 00:08:52.128 20:35:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:08:52.128 20:35:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:52.128 20:35:35 -- accel/accel.sh@12 -- # build_accel_config 00:08:52.128 20:35:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:52.128 20:35:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:52.128 20:35:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:52.128 20:35:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:52.128 20:35:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:52.128 20:35:35 -- accel/accel.sh@41 -- # local IFS=, 00:08:52.128 20:35:35 -- accel/accel.sh@42 -- # jq -r . 00:08:52.128 [2024-04-15 20:35:35.521920] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:08:52.128 [2024-04-15 20:35:35.522078] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43337 ] 00:08:52.386 [2024-04-15 20:35:35.691159] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.644 [2024-04-15 20:35:35.902990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.575 20:35:38 -- accel/accel.sh@18 -- # out=' 00:08:54.575 SPDK Configuration: 00:08:54.575 Core mask: 0x1 00:08:54.575 00:08:54.575 Accel Perf Configuration: 00:08:54.575 Workload Type: dif_generate 00:08:54.575 Vector size: 4096 bytes 00:08:54.575 Transfer size: 4096 bytes 00:08:54.575 Block size: 512 bytes 00:08:54.575 Metadata size: 8 bytes 00:08:54.575 Vector count 1 00:08:54.575 Module: software 00:08:54.575 Queue depth: 32 00:08:54.575 Allocate depth: 32 00:08:54.575 # threads/core: 1 00:08:54.575 Run time: 1 seconds 00:08:54.575 Verify: No 00:08:54.575 00:08:54.575 Running for 1 seconds... 
00:08:54.575 00:08:54.575 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:54.575 ------------------------------------------------------------------------------------ 00:08:54.575 0,0 57888/s 229 MiB/s 0 0 00:08:54.575 ==================================================================================== 00:08:54.575 Total 57888/s 226 MiB/s 0 0' 00:08:54.575 20:35:38 -- accel/accel.sh@20 -- # IFS=: 00:08:54.575 20:35:38 -- accel/accel.sh@20 -- # read -r var val 00:08:54.575 20:35:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:08:54.575 20:35:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:54.575 20:35:38 -- accel/accel.sh@12 -- # build_accel_config 00:08:54.575 20:35:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:54.575 20:35:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:54.575 20:35:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:54.575 20:35:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:54.575 20:35:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:54.575 20:35:38 -- accel/accel.sh@41 -- # local IFS=, 00:08:54.575 20:35:38 -- accel/accel.sh@42 -- # jq -r . 00:08:54.834 [2024-04-15 20:35:38.150174] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:08:54.834 [2024-04-15 20:35:38.150336] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43380 ] 00:08:54.834 [2024-04-15 20:35:38.320002] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.093 [2024-04-15 20:35:38.528662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.352 20:35:38 -- accel/accel.sh@21 -- # val= 00:08:55.352 20:35:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.352 20:35:38 -- accel/accel.sh@20 -- # IFS=: 00:08:55.352 20:35:38 -- accel/accel.sh@20 -- # read -r var val 00:08:55.352 20:35:38 -- accel/accel.sh@21 -- # val= 00:08:55.352 20:35:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.352 20:35:38 -- accel/accel.sh@20 -- # IFS=: 00:08:55.352 20:35:38 -- accel/accel.sh@20 -- # read -r var val 00:08:55.352 20:35:38 -- accel/accel.sh@21 -- # val=0x1 00:08:55.352 20:35:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.352 20:35:38 -- accel/accel.sh@20 -- # IFS=: 00:08:55.352 20:35:38 -- accel/accel.sh@20 -- # read -r var val 00:08:55.352 20:35:38 -- accel/accel.sh@21 -- # val= 00:08:55.352 20:35:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.352 20:35:38 -- accel/accel.sh@20 -- # IFS=: 00:08:55.352 20:35:38 -- accel/accel.sh@20 -- # read -r var val 00:08:55.352 20:35:38 -- accel/accel.sh@21 -- # val= 00:08:55.352 20:35:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.352 20:35:38 -- accel/accel.sh@20 -- # IFS=: 00:08:55.352 20:35:38 -- accel/accel.sh@20 -- # read -r var val 00:08:55.352 20:35:38 -- accel/accel.sh@21 -- # val=dif_generate 00:08:55.352 20:35:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.352 20:35:38 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:08:55.352 20:35:38 -- accel/accel.sh@20 -- # IFS=: 00:08:55.352 20:35:38 -- accel/accel.sh@20 -- # read -r var val 00:08:55.352 20:35:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:55.352 20:35:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.352 20:35:38 -- accel/accel.sh@20 -- # IFS=: 00:08:55.352 20:35:38 -- accel/accel.sh@20 -- # read -r var val 00:08:55.352 
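As before, the dif_generate total works out from the transfer count: 57888/s x 4096 B ≈ 226 MiB/s. The long block of val= assignments that follows is accel.sh parsing that summary back out of the second run's output — each val matches a line of the configuration report (dif_generate, '4096 bytes', '512 bytes', '8 bytes', software, 32, 1, '1 seconds') before the final [[ -n software ]] / [[ -n dif_generate ]] checks pass judgment on the test.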
20:35:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:55.352 20:35:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.352 20:35:38 -- accel/accel.sh@20 -- # IFS=: 00:08:55.352 20:35:38 -- accel/accel.sh@20 -- # read -r var val 00:08:55.352 20:35:38 -- accel/accel.sh@21 -- # val='512 bytes' 00:08:55.352 20:35:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.352 20:35:38 -- accel/accel.sh@20 -- # IFS=: 00:08:55.352 20:35:38 -- accel/accel.sh@20 -- # read -r var val 00:08:55.352 20:35:38 -- accel/accel.sh@21 -- # val='8 bytes' 00:08:55.352 20:35:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.352 20:35:38 -- accel/accel.sh@20 -- # IFS=: 00:08:55.352 20:35:38 -- accel/accel.sh@20 -- # read -r var val 00:08:55.352 20:35:38 -- accel/accel.sh@21 -- # val= 00:08:55.352 20:35:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.352 20:35:38 -- accel/accel.sh@20 -- # IFS=: 00:08:55.352 20:35:38 -- accel/accel.sh@20 -- # read -r var val 00:08:55.352 20:35:38 -- accel/accel.sh@21 -- # val=software 00:08:55.352 20:35:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.352 20:35:38 -- accel/accel.sh@23 -- # accel_module=software 00:08:55.352 20:35:38 -- accel/accel.sh@20 -- # IFS=: 00:08:55.352 20:35:38 -- accel/accel.sh@20 -- # read -r var val 00:08:55.352 20:35:38 -- accel/accel.sh@21 -- # val=32 00:08:55.352 20:35:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.352 20:35:38 -- accel/accel.sh@20 -- # IFS=: 00:08:55.352 20:35:38 -- accel/accel.sh@20 -- # read -r var val 00:08:55.352 20:35:38 -- accel/accel.sh@21 -- # val=32 00:08:55.352 20:35:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.352 20:35:38 -- accel/accel.sh@20 -- # IFS=: 00:08:55.352 20:35:38 -- accel/accel.sh@20 -- # read -r var val 00:08:55.352 20:35:38 -- accel/accel.sh@21 -- # val=1 00:08:55.352 20:35:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.352 20:35:38 -- accel/accel.sh@20 -- # IFS=: 00:08:55.352 20:35:38 -- accel/accel.sh@20 -- # read -r var val 00:08:55.352 20:35:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:55.352 20:35:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.352 20:35:38 -- accel/accel.sh@20 -- # IFS=: 00:08:55.352 20:35:38 -- accel/accel.sh@20 -- # read -r var val 00:08:55.352 20:35:38 -- accel/accel.sh@21 -- # val=No 00:08:55.352 20:35:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.352 20:35:38 -- accel/accel.sh@20 -- # IFS=: 00:08:55.352 20:35:38 -- accel/accel.sh@20 -- # read -r var val 00:08:55.352 20:35:38 -- accel/accel.sh@21 -- # val= 00:08:55.352 20:35:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.352 20:35:38 -- accel/accel.sh@20 -- # IFS=: 00:08:55.352 20:35:38 -- accel/accel.sh@20 -- # read -r var val 00:08:55.352 20:35:38 -- accel/accel.sh@21 -- # val= 00:08:55.352 20:35:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:55.352 20:35:38 -- accel/accel.sh@20 -- # IFS=: 00:08:55.352 20:35:38 -- accel/accel.sh@20 -- # read -r var val 00:08:57.255 20:35:40 -- accel/accel.sh@21 -- # val= 00:08:57.255 20:35:40 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.255 20:35:40 -- accel/accel.sh@20 -- # IFS=: 00:08:57.255 20:35:40 -- accel/accel.sh@20 -- # read -r var val 00:08:57.255 20:35:40 -- accel/accel.sh@21 -- # val= 00:08:57.255 20:35:40 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.255 20:35:40 -- accel/accel.sh@20 -- # IFS=: 00:08:57.255 20:35:40 -- accel/accel.sh@20 -- # read -r var val 00:08:57.255 20:35:40 -- accel/accel.sh@21 -- # val= 00:08:57.255 20:35:40 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.255 20:35:40 -- accel/accel.sh@20 -- # 
IFS=: 00:08:57.255 20:35:40 -- accel/accel.sh@20 -- # read -r var val 00:08:57.255 20:35:40 -- accel/accel.sh@21 -- # val= 00:08:57.255 20:35:40 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.255 20:35:40 -- accel/accel.sh@20 -- # IFS=: 00:08:57.255 20:35:40 -- accel/accel.sh@20 -- # read -r var val 00:08:57.255 20:35:40 -- accel/accel.sh@21 -- # val= 00:08:57.255 20:35:40 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.255 20:35:40 -- accel/accel.sh@20 -- # IFS=: 00:08:57.255 20:35:40 -- accel/accel.sh@20 -- # read -r var val 00:08:57.255 20:35:40 -- accel/accel.sh@21 -- # val= 00:08:57.255 20:35:40 -- accel/accel.sh@22 -- # case "$var" in 00:08:57.255 20:35:40 -- accel/accel.sh@20 -- # IFS=: 00:08:57.255 20:35:40 -- accel/accel.sh@20 -- # read -r var val 00:08:57.255 20:35:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:57.255 20:35:40 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:08:57.255 20:35:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:57.255 00:08:57.255 real 0m5.240s 00:08:57.255 user 0m4.588s 00:08:57.255 sys 0m0.345s 00:08:57.255 ************************************ 00:08:57.255 END TEST accel_dif_generate 00:08:57.255 ************************************ 00:08:57.255 20:35:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:57.255 20:35:40 -- common/autotest_common.sh@10 -- # set +x 00:08:57.255 20:35:40 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:57.255 20:35:40 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:08:57.255 20:35:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:57.255 20:35:40 -- common/autotest_common.sh@10 -- # set +x 00:08:57.255 ************************************ 00:08:57.255 START TEST accel_dif_generate_copy 00:08:57.255 ************************************ 00:08:57.255 20:35:40 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:08:57.255 20:35:40 -- accel/accel.sh@16 -- # local accel_opc 00:08:57.255 20:35:40 -- accel/accel.sh@17 -- # local accel_module 00:08:57.255 20:35:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:08:57.255 20:35:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:57.255 20:35:40 -- accel/accel.sh@12 -- # build_accel_config 00:08:57.255 20:35:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:57.255 20:35:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:57.255 20:35:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:57.255 20:35:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:57.255 20:35:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:57.255 20:35:40 -- accel/accel.sh@41 -- # local IFS=, 00:08:57.255 20:35:40 -- accel/accel.sh@42 -- # jq -r . 00:08:57.513 [2024-04-15 20:35:40.828842] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:08:57.513 [2024-04-15 20:35:40.829000] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43434 ] 00:08:57.513 [2024-04-15 20:35:40.989579] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.772 [2024-04-15 20:35:41.191658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.303 20:35:43 -- accel/accel.sh@18 -- # out=' 00:09:00.303 SPDK Configuration: 00:09:00.303 Core mask: 0x1 00:09:00.303 00:09:00.303 Accel Perf Configuration: 00:09:00.303 Workload Type: dif_generate_copy 00:09:00.303 Vector size: 4096 bytes 00:09:00.303 Transfer size: 4096 bytes 00:09:00.303 Vector count 1 00:09:00.303 Module: software 00:09:00.303 Queue depth: 32 00:09:00.303 Allocate depth: 32 00:09:00.303 # threads/core: 1 00:09:00.303 Run time: 1 seconds 00:09:00.303 Verify: No 00:09:00.303 00:09:00.303 Running for 1 seconds... 00:09:00.303 00:09:00.303 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:00.303 ------------------------------------------------------------------------------------ 00:09:00.303 0,0 54496/s 216 MiB/s 0 0 00:09:00.303 ==================================================================================== 00:09:00.303 Total 54496/s 212 MiB/s 0 0' 00:09:00.303 20:35:43 -- accel/accel.sh@20 -- # IFS=: 00:09:00.303 20:35:43 -- accel/accel.sh@20 -- # read -r var val 00:09:00.303 20:35:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:09:00.303 20:35:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:09:00.303 20:35:43 -- accel/accel.sh@12 -- # build_accel_config 00:09:00.303 20:35:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:00.303 20:35:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:00.303 20:35:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:00.303 20:35:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:00.303 20:35:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:00.303 20:35:43 -- accel/accel.sh@41 -- # local IFS=, 00:09:00.303 20:35:43 -- accel/accel.sh@42 -- # jq -r . 00:09:00.303 [2024-04-15 20:35:43.421049] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:09:00.303 [2024-04-15 20:35:43.421214] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43472 ] 00:09:00.303 [2024-04-15 20:35:43.593772] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.303 [2024-04-15 20:35:43.794496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.562 20:35:44 -- accel/accel.sh@21 -- # val= 00:09:00.562 20:35:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.562 20:35:44 -- accel/accel.sh@20 -- # IFS=: 00:09:00.562 20:35:44 -- accel/accel.sh@20 -- # read -r var val 00:09:00.562 20:35:44 -- accel/accel.sh@21 -- # val= 00:09:00.562 20:35:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.562 20:35:44 -- accel/accel.sh@20 -- # IFS=: 00:09:00.562 20:35:44 -- accel/accel.sh@20 -- # read -r var val 00:09:00.562 20:35:44 -- accel/accel.sh@21 -- # val=0x1 00:09:00.562 20:35:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.562 20:35:44 -- accel/accel.sh@20 -- # IFS=: 00:09:00.562 20:35:44 -- accel/accel.sh@20 -- # read -r var val 00:09:00.562 20:35:44 -- accel/accel.sh@21 -- # val= 00:09:00.562 20:35:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.562 20:35:44 -- accel/accel.sh@20 -- # IFS=: 00:09:00.562 20:35:44 -- accel/accel.sh@20 -- # read -r var val 00:09:00.562 20:35:44 -- accel/accel.sh@21 -- # val= 00:09:00.562 20:35:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.562 20:35:44 -- accel/accel.sh@20 -- # IFS=: 00:09:00.562 20:35:44 -- accel/accel.sh@20 -- # read -r var val 00:09:00.562 20:35:44 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:09:00.562 20:35:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.562 20:35:44 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:09:00.562 20:35:44 -- accel/accel.sh@20 -- # IFS=: 00:09:00.562 20:35:44 -- accel/accel.sh@20 -- # read -r var val 00:09:00.562 20:35:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:00.562 20:35:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.562 20:35:44 -- accel/accel.sh@20 -- # IFS=: 00:09:00.562 20:35:44 -- accel/accel.sh@20 -- # read -r var val 00:09:00.562 20:35:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:00.562 20:35:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.562 20:35:44 -- accel/accel.sh@20 -- # IFS=: 00:09:00.562 20:35:44 -- accel/accel.sh@20 -- # read -r var val 00:09:00.562 20:35:44 -- accel/accel.sh@21 -- # val= 00:09:00.562 20:35:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.562 20:35:44 -- accel/accel.sh@20 -- # IFS=: 00:09:00.562 20:35:44 -- accel/accel.sh@20 -- # read -r var val 00:09:00.562 20:35:44 -- accel/accel.sh@21 -- # val=software 00:09:00.562 20:35:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.562 20:35:44 -- accel/accel.sh@23 -- # accel_module=software 00:09:00.562 20:35:44 -- accel/accel.sh@20 -- # IFS=: 00:09:00.562 20:35:44 -- accel/accel.sh@20 -- # read -r var val 00:09:00.562 20:35:44 -- accel/accel.sh@21 -- # val=32 00:09:00.562 20:35:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.562 20:35:44 -- accel/accel.sh@20 -- # IFS=: 00:09:00.562 20:35:44 -- accel/accel.sh@20 -- # read -r var val 00:09:00.562 20:35:44 -- accel/accel.sh@21 -- # val=32 00:09:00.562 20:35:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.562 20:35:44 -- accel/accel.sh@20 -- # IFS=: 00:09:00.562 20:35:44 -- accel/accel.sh@20 -- # read -r var val 00:09:00.562 20:35:44 -- accel/accel.sh@21 
-- # val=1 00:09:00.562 20:35:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.562 20:35:44 -- accel/accel.sh@20 -- # IFS=: 00:09:00.562 20:35:44 -- accel/accel.sh@20 -- # read -r var val 00:09:00.562 20:35:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:00.562 20:35:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.562 20:35:44 -- accel/accel.sh@20 -- # IFS=: 00:09:00.562 20:35:44 -- accel/accel.sh@20 -- # read -r var val 00:09:00.562 20:35:44 -- accel/accel.sh@21 -- # val=No 00:09:00.562 20:35:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.563 20:35:44 -- accel/accel.sh@20 -- # IFS=: 00:09:00.563 20:35:44 -- accel/accel.sh@20 -- # read -r var val 00:09:00.563 20:35:44 -- accel/accel.sh@21 -- # val= 00:09:00.563 20:35:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.563 20:35:44 -- accel/accel.sh@20 -- # IFS=: 00:09:00.563 20:35:44 -- accel/accel.sh@20 -- # read -r var val 00:09:00.563 20:35:44 -- accel/accel.sh@21 -- # val= 00:09:00.563 20:35:44 -- accel/accel.sh@22 -- # case "$var" in 00:09:00.563 20:35:44 -- accel/accel.sh@20 -- # IFS=: 00:09:00.563 20:35:44 -- accel/accel.sh@20 -- # read -r var val 00:09:02.524 20:35:45 -- accel/accel.sh@21 -- # val= 00:09:02.524 20:35:45 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.524 20:35:45 -- accel/accel.sh@20 -- # IFS=: 00:09:02.524 20:35:45 -- accel/accel.sh@20 -- # read -r var val 00:09:02.524 20:35:45 -- accel/accel.sh@21 -- # val= 00:09:02.524 20:35:45 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.524 20:35:45 -- accel/accel.sh@20 -- # IFS=: 00:09:02.524 20:35:45 -- accel/accel.sh@20 -- # read -r var val 00:09:02.524 20:35:45 -- accel/accel.sh@21 -- # val= 00:09:02.524 20:35:45 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.524 20:35:45 -- accel/accel.sh@20 -- # IFS=: 00:09:02.524 20:35:45 -- accel/accel.sh@20 -- # read -r var val 00:09:02.524 20:35:45 -- accel/accel.sh@21 -- # val= 00:09:02.524 20:35:45 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.524 20:35:45 -- accel/accel.sh@20 -- # IFS=: 00:09:02.524 20:35:45 -- accel/accel.sh@20 -- # read -r var val 00:09:02.524 20:35:45 -- accel/accel.sh@21 -- # val= 00:09:02.524 20:35:45 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.524 20:35:45 -- accel/accel.sh@20 -- # IFS=: 00:09:02.524 20:35:45 -- accel/accel.sh@20 -- # read -r var val 00:09:02.524 20:35:45 -- accel/accel.sh@21 -- # val= 00:09:02.524 20:35:45 -- accel/accel.sh@22 -- # case "$var" in 00:09:02.524 20:35:45 -- accel/accel.sh@20 -- # IFS=: 00:09:02.524 20:35:45 -- accel/accel.sh@20 -- # read -r var val 00:09:02.524 20:35:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:02.524 20:35:45 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:09:02.524 20:35:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:02.524 00:09:02.524 real 0m5.214s 00:09:02.524 user 0m4.579s 00:09:02.524 sys 0m0.354s 00:09:02.524 20:35:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:02.524 20:35:45 -- common/autotest_common.sh@10 -- # set +x 00:09:02.524 ************************************ 00:09:02.524 END TEST accel_dif_generate_copy 00:09:02.524 ************************************ 00:09:02.524 20:35:45 -- accel/accel.sh@107 -- # [[ n == y ]] 00:09:02.524 20:35:45 -- accel/accel.sh@116 -- # [[ n == y ]] 00:09:02.524 20:35:45 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:02.524 20:35:45 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:02.524 20:35:45 -- common/autotest_common.sh@1083 
-- # xtrace_disable 00:09:02.524 20:35:45 -- common/autotest_common.sh@10 -- # set +x 00:09:02.524 20:35:45 -- accel/accel.sh@129 -- # build_accel_config 00:09:02.524 20:35:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:02.524 20:35:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:02.524 20:35:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:02.524 20:35:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:02.524 20:35:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:02.524 20:35:45 -- accel/accel.sh@41 -- # local IFS=, 00:09:02.524 20:35:45 -- accel/accel.sh@42 -- # jq -r . 00:09:02.524 ************************************ 00:09:02.524 START TEST accel_dif_functional_tests 00:09:02.524 ************************************ 00:09:02.524 20:35:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:02.783 [2024-04-15 20:35:46.107759] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:09:02.783 [2024-04-15 20:35:46.107900] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43527 ] 00:09:02.783 [2024-04-15 20:35:46.258134] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:03.042 [2024-04-15 20:35:46.482342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.042 [2024-04-15 20:35:46.482487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.042 [2024-04-15 20:35:46.482487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:03.609 00:09:03.609 00:09:03.609 CUnit - A unit testing framework for C - Version 2.1-3 00:09:03.609 http://cunit.sourceforge.net/ 00:09:03.609 00:09:03.609 00:09:03.609 Suite: accel_dif 00:09:03.609 Test: verify: DIF generated, GUARD check ...passed 00:09:03.609 Test: verify: DIF generated, APPTAG check ...passed 00:09:03.609 Test: verify: DIF generated, REFTAG check ...passed 00:09:03.609 Test: verify: DIF not generated, GUARD check ...[2024-04-15 20:35:46.906546] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:03.609 passed 00:09:03.609 Test: verify: DIF not generated, APPTAG check ...[2024-04-15 20:35:46.906885] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:03.609 [2024-04-15 20:35:46.906969] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:03.609 passed 00:09:03.609 Test: verify: DIF not generated, REFTAG check ...[2024-04-15 20:35:46.907056] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:03.609 passed 00:09:03.609 Test: verify: APPTAG correct, APPTAG check ...[2024-04-15 20:35:46.907380] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:03.609 [2024-04-15 20:35:46.907460] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:03.609 passed 00:09:03.609 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:09:03.609 Test: verify: APPTAG incorrect, no APPTAG check ...[2024-04-15 20:35:46.907841] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:09:03.609 passed 00:09:03.609 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:09:03.609 Test: verify: REFTAG_INIT correct, REFTAG 
check ...passed 00:09:03.609 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:09:03.609 Test: generate copy: DIF generated, GUARD check ...passed 00:09:03.609 Test: generate copy: DIF generated, APTTAG check ...passed 00:09:03.609 Test: generate copy: DIF generated, REFTAG check ...[2024-04-15 20:35:46.908472] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:09:03.609 passed 00:09:03.609 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:09:03.609 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:09:03.609 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:09:03.609 Test: generate copy: iovecs-len validate ...[2024-04-15 20:35:46.909183] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:09:03.609 passed 00:09:03.609 Test: generate copy: buffer alignment validate ...passed 00:09:03.609 00:09:03.609 Run Summary: Type Total Ran Passed Failed Inactive 00:09:03.609 suites 1 1 n/a 0 0 00:09:03.609 tests 20 20 20 0 0 00:09:03.609 asserts 204 204 204 0 n/a 00:09:03.609 00:09:03.609 Elapsed time = 0.010 seconds 00:09:04.984 ************************************ 00:09:04.984 END TEST accel_dif_functional_tests 00:09:04.984 ************************************ 00:09:04.984 00:09:04.984 real 0m2.375s 00:09:04.984 user 0m4.865s 00:09:04.984 sys 0m0.222s 00:09:04.984 20:35:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:04.984 20:35:48 -- common/autotest_common.sh@10 -- # set +x 00:09:04.984 ************************************ 00:09:04.984 END TEST accel 00:09:04.984 ************************************ 00:09:04.984 00:09:04.984 real 1m18.907s 00:09:04.984 user 1m11.317s 00:09:04.984 sys 0m6.396s 00:09:04.984 20:35:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:04.984 20:35:48 -- common/autotest_common.sh@10 -- # set +x 00:09:04.984 20:35:48 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:09:04.984 20:35:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:04.984 20:35:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:04.984 20:35:48 -- common/autotest_common.sh@10 -- # set +x 00:09:04.984 ************************************ 00:09:04.984 START TEST accel_rpc 00:09:04.984 ************************************ 00:09:04.984 20:35:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:09:05.243 * Looking for test storage... 00:09:05.243 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:09:05.243 20:35:48 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:05.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.243 20:35:48 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=43630 00:09:05.243 20:35:48 -- accel/accel_rpc.sh@15 -- # waitforlisten 43630 00:09:05.243 20:35:48 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:09:05.243 20:35:48 -- common/autotest_common.sh@819 -- # '[' -z 43630 ']' 00:09:05.243 20:35:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.243 20:35:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:05.243 20:35:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:05.243 20:35:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:05.243 20:35:48 -- common/autotest_common.sh@10 -- # set +x 00:09:05.243 [2024-04-15 20:35:48.725211] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:09:05.243 [2024-04-15 20:35:48.725371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43630 ] 00:09:05.502 [2024-04-15 20:35:48.880788] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.760 [2024-04-15 20:35:49.050729] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:05.760 [2024-04-15 20:35:49.050916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.018 20:35:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:06.018 20:35:49 -- common/autotest_common.sh@852 -- # return 0 00:09:06.018 20:35:49 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:09:06.018 20:35:49 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:09:06.018 20:35:49 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:09:06.018 20:35:49 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:09:06.018 20:35:49 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:09:06.018 20:35:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:06.018 20:35:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:06.018 20:35:49 -- common/autotest_common.sh@10 -- # set +x 00:09:06.018 ************************************ 00:09:06.018 START TEST accel_assign_opcode 00:09:06.018 ************************************ 00:09:06.018 20:35:49 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:09:06.018 20:35:49 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:09:06.018 20:35:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:06.019 20:35:49 -- common/autotest_common.sh@10 -- # set +x 00:09:06.019 [2024-04-15 20:35:49.437052] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:09:06.019 20:35:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:06.019 20:35:49 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:09:06.019 20:35:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:06.019 20:35:49 -- common/autotest_common.sh@10 -- # set +x 00:09:06.019 [2024-04-15 20:35:49.449016] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:09:06.019 20:35:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:06.019 20:35:49 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:09:06.019 20:35:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:06.019 20:35:49 -- common/autotest_common.sh@10 -- # set +x 00:09:06.586 20:35:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:06.586 20:35:50 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:09:06.586 20:35:50 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:09:06.586 20:35:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:06.586 20:35:50 -- common/autotest_common.sh@10 -- # set +x 00:09:06.586 20:35:50 -- accel/accel_rpc.sh@42 -- # grep software 00:09:06.843 20:35:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:06.843 software 00:09:06.843 
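The accel_assign_opcode case traced above reduces to three RPC calls made while spdk_tgt is paused under --wait-for-rpc. As a sketch, with the rpc.py path taken from this run and a running target assumed:

  # Sketch: bind the copy opcode to the software module, start the
  # framework, then confirm the assignment took effect.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$RPC" accel_assign_opc -o copy -m software
  "$RPC" framework_start_init
  "$RPC" accel_get_opc_assignments | jq -r .copy   # prints: software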
************************************ 00:09:06.843 END TEST accel_assign_opcode 00:09:06.843 ************************************ 00:09:06.843 00:09:06.843 real 0m0.691s 00:09:06.843 user 0m0.052s 00:09:06.843 sys 0m0.013s 00:09:06.843 20:35:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.843 20:35:50 -- common/autotest_common.sh@10 -- # set +x 00:09:06.843 20:35:50 -- accel/accel_rpc.sh@55 -- # killprocess 43630 00:09:06.843 20:35:50 -- common/autotest_common.sh@926 -- # '[' -z 43630 ']' 00:09:06.843 20:35:50 -- common/autotest_common.sh@930 -- # kill -0 43630 00:09:06.843 20:35:50 -- common/autotest_common.sh@931 -- # uname 00:09:06.843 20:35:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:06.843 20:35:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 43630 00:09:06.843 killing process with pid 43630 00:09:06.843 20:35:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:06.843 20:35:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:06.843 20:35:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 43630' 00:09:06.843 20:35:50 -- common/autotest_common.sh@945 -- # kill 43630 00:09:06.843 20:35:50 -- common/autotest_common.sh@950 -- # wait 43630 00:09:09.369 00:09:09.369 real 0m3.791s 00:09:09.369 user 0m3.522s 00:09:09.369 sys 0m0.559s 00:09:09.369 ************************************ 00:09:09.369 END TEST accel_rpc 00:09:09.369 ************************************ 00:09:09.369 20:35:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.369 20:35:52 -- common/autotest_common.sh@10 -- # set +x 00:09:09.370 20:35:52 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:09.370 20:35:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:09.370 20:35:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:09.370 20:35:52 -- common/autotest_common.sh@10 -- # set +x 00:09:09.370 ************************************ 00:09:09.370 START TEST app_cmdline 00:09:09.370 ************************************ 00:09:09.370 20:35:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:09.370 * Looking for test storage... 00:09:09.370 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:09.370 20:35:52 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:09.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.370 20:35:52 -- app/cmdline.sh@17 -- # spdk_tgt_pid=43771 00:09:09.370 20:35:52 -- app/cmdline.sh@18 -- # waitforlisten 43771 00:09:09.370 20:35:52 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:09.370 20:35:52 -- common/autotest_common.sh@819 -- # '[' -z 43771 ']' 00:09:09.370 20:35:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.370 20:35:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:09.370 20:35:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.370 20:35:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:09.370 20:35:52 -- common/autotest_common.sh@10 -- # set +x 00:09:09.370 [2024-04-15 20:35:52.567224] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
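Note the --rpcs-allowed spdk_get_version,rpc_get_methods flag on the spdk_tgt invocation above: it restricts the RPC surface to exactly those two methods, which is what the cmdline test exercises below. A sketch of the expected behaviour against such a target:

  # Sketch: only the allowed methods answer; anything else comes back
  # as JSON-RPC error -32601 (Method not found), as asserted below.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$RPC" spdk_get_version          # allowed
  "$RPC" rpc_get_methods           # allowed
  "$RPC" env_dpdk_get_mem_stats    # rejected: Method not found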
00:09:09.370 [2024-04-15 20:35:52.567379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43771 ] 00:09:09.370 [2024-04-15 20:35:52.759104] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.627 [2024-04-15 20:35:52.929920] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:09.627 [2024-04-15 20:35:52.930109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.561 20:35:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:10.561 20:35:53 -- common/autotest_common.sh@852 -- # return 0 00:09:10.561 20:35:53 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:09:10.820 { 00:09:10.820 "version": "SPDK v24.01.1-pre git sha1 3b33f4333", 00:09:10.820 "fields": { 00:09:10.820 "major": 24, 00:09:10.820 "minor": 1, 00:09:10.820 "patch": 1, 00:09:10.820 "suffix": "-pre", 00:09:10.820 "commit": "3b33f4333" 00:09:10.820 } 00:09:10.820 } 00:09:10.820 20:35:54 -- app/cmdline.sh@22 -- # expected_methods=() 00:09:10.820 20:35:54 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:10.820 20:35:54 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:10.820 20:35:54 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:10.820 20:35:54 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:10.820 20:35:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:10.820 20:35:54 -- common/autotest_common.sh@10 -- # set +x 00:09:10.820 20:35:54 -- app/cmdline.sh@26 -- # sort 00:09:10.820 20:35:54 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:10.820 20:35:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:10.820 20:35:54 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:10.820 20:35:54 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:10.820 20:35:54 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:10.820 20:35:54 -- common/autotest_common.sh@640 -- # local es=0 00:09:10.820 20:35:54 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:10.820 20:35:54 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:10.820 20:35:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:10.820 20:35:54 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:10.820 20:35:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:10.820 20:35:54 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:10.820 20:35:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:10.820 20:35:54 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:10.820 20:35:54 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:10.820 20:35:54 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:11.079 request: 00:09:11.079 { 00:09:11.079 "method": "env_dpdk_get_mem_stats", 00:09:11.079 "req_id": 1 00:09:11.079 } 00:09:11.079 Got 
JSON-RPC error response 00:09:11.079 response: 00:09:11.079 { 00:09:11.079 "code": -32601, 00:09:11.079 "message": "Method not found" 00:09:11.079 } 00:09:11.079 20:35:54 -- common/autotest_common.sh@643 -- # es=1 00:09:11.079 20:35:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:11.079 20:35:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:11.079 20:35:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:11.079 20:35:54 -- app/cmdline.sh@1 -- # killprocess 43771 00:09:11.079 20:35:54 -- common/autotest_common.sh@926 -- # '[' -z 43771 ']' 00:09:11.079 20:35:54 -- common/autotest_common.sh@930 -- # kill -0 43771 00:09:11.079 20:35:54 -- common/autotest_common.sh@931 -- # uname 00:09:11.079 20:35:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:11.079 20:35:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 43771 00:09:11.079 killing process with pid 43771 00:09:11.079 20:35:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:11.079 20:35:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:11.079 20:35:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 43771' 00:09:11.079 20:35:54 -- common/autotest_common.sh@945 -- # kill 43771 00:09:11.079 20:35:54 -- common/autotest_common.sh@950 -- # wait 43771 00:09:12.982 00:09:12.982 real 0m4.171s 00:09:12.982 user 0m4.355s 00:09:12.982 sys 0m0.558s 00:09:12.982 20:35:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.982 20:35:56 -- common/autotest_common.sh@10 -- # set +x 00:09:12.982 ************************************ 00:09:12.982 END TEST app_cmdline 00:09:12.982 ************************************ 00:09:13.241 20:35:56 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:13.241 20:35:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:13.241 20:35:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:13.241 20:35:56 -- common/autotest_common.sh@10 -- # set +x 00:09:13.241 ************************************ 00:09:13.241 START TEST version 00:09:13.241 ************************************ 00:09:13.241 20:35:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:13.241 * Looking for test storage... 
00:09:13.241 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:13.241 20:35:56 -- app/version.sh@17 -- # get_header_version major 00:09:13.241 20:35:56 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:13.241 20:35:56 -- app/version.sh@14 -- # cut -f2 00:09:13.241 20:35:56 -- app/version.sh@14 -- # tr -d '"' 00:09:13.241 20:35:56 -- app/version.sh@17 -- # major=24 00:09:13.241 20:35:56 -- app/version.sh@18 -- # get_header_version minor 00:09:13.241 20:35:56 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:13.241 20:35:56 -- app/version.sh@14 -- # cut -f2 00:09:13.241 20:35:56 -- app/version.sh@14 -- # tr -d '"' 00:09:13.241 20:35:56 -- app/version.sh@18 -- # minor=1 00:09:13.241 20:35:56 -- app/version.sh@19 -- # get_header_version patch 00:09:13.241 20:35:56 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:13.241 20:35:56 -- app/version.sh@14 -- # cut -f2 00:09:13.241 20:35:56 -- app/version.sh@14 -- # tr -d '"' 00:09:13.241 20:35:56 -- app/version.sh@19 -- # patch=1 00:09:13.241 20:35:56 -- app/version.sh@20 -- # get_header_version suffix 00:09:13.241 20:35:56 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:13.241 20:35:56 -- app/version.sh@14 -- # cut -f2 00:09:13.241 20:35:56 -- app/version.sh@14 -- # tr -d '"' 00:09:13.241 20:35:56 -- app/version.sh@20 -- # suffix=-pre 00:09:13.241 20:35:56 -- app/version.sh@22 -- # version=24.1 00:09:13.241 20:35:56 -- app/version.sh@25 -- # (( patch != 0 )) 00:09:13.241 20:35:56 -- app/version.sh@25 -- # version=24.1.1 00:09:13.241 20:35:56 -- app/version.sh@28 -- # version=24.1.1rc0 00:09:13.241 20:35:56 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:13.241 20:35:56 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:13.241 20:35:56 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:09:13.241 20:35:56 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:09:13.499 ************************************ 00:09:13.499 END TEST version 00:09:13.499 ************************************ 00:09:13.499 00:09:13.499 real 0m0.198s 00:09:13.499 user 0m0.122s 00:09:13.499 sys 0m0.125s 00:09:13.499 20:35:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:13.499 20:35:56 -- common/autotest_common.sh@10 -- # set +x 00:09:13.499 20:35:56 -- spdk/autotest.sh@194 -- # '[' 1 -eq 1 ']' 00:09:13.499 20:35:56 -- spdk/autotest.sh@195 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:09:13.499 20:35:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:13.499 20:35:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:13.499 20:35:56 -- common/autotest_common.sh@10 -- # set +x 00:09:13.499 ************************************ 00:09:13.499 START TEST blockdev_general 00:09:13.499 ************************************ 00:09:13.499 20:35:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:09:13.499 * Looking for test storage... 
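Recapping the version.sh run above: each version field is scraped out of include/spdk/version.h with the same grep | cut | tr pipeline, e.g. for the major number:

  # Sketch of the header parsing shown in the version.sh trace above.
  HDR=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$HDR" | cut -f2 | tr -d '"')
  echo "$major"   # 24 in this run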
00:09:13.499 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:13.499 20:35:56 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:13.499 20:35:56 -- bdev/nbd_common.sh@6 -- # set -e 00:09:13.499 20:35:56 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:13.499 20:35:56 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:13.499 20:35:56 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:13.500 20:35:56 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:13.500 20:35:56 -- bdev/blockdev.sh@18 -- # : 00:09:13.500 20:35:56 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:09:13.500 20:35:56 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:09:13.500 20:35:56 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:09:13.500 20:35:56 -- bdev/blockdev.sh@672 -- # uname -s 00:09:13.500 20:35:56 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:09:13.500 20:35:56 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:09:13.500 20:35:56 -- bdev/blockdev.sh@680 -- # test_type=bdev 00:09:13.500 20:35:56 -- bdev/blockdev.sh@681 -- # crypto_device= 00:09:13.500 20:35:56 -- bdev/blockdev.sh@682 -- # dek= 00:09:13.500 20:35:56 -- bdev/blockdev.sh@683 -- # env_ctx= 00:09:13.500 20:35:56 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:09:13.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.500 20:35:56 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:09:13.500 20:35:56 -- bdev/blockdev.sh@688 -- # [[ bdev == bdev ]] 00:09:13.500 20:35:56 -- bdev/blockdev.sh@689 -- # wait_for_rpc=--wait-for-rpc 00:09:13.500 20:35:56 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:09:13.500 20:35:56 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=43977 00:09:13.500 20:35:56 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:13.500 20:35:56 -- bdev/blockdev.sh@47 -- # waitforlisten 43977 00:09:13.500 20:35:56 -- common/autotest_common.sh@819 -- # '[' -z 43977 ']' 00:09:13.500 20:35:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.500 20:35:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:13.500 20:35:56 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:09:13.500 20:35:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.500 20:35:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:13.500 20:35:56 -- common/autotest_common.sh@10 -- # set +x 00:09:13.758 [2024-04-15 20:35:57.080977] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
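The bdev configuration assembled in the trace that follows is built entirely over RPC while the target waits under --wait-for-rpc. The AIO0 target, for instance, is just a file served through the AIO driver; the dd and bdev_aio_create calls visible below amount to:

  # Sketch: create a 10 MB backing file (5000 blocks of 2048 bytes)
  # and register it as bdev AIO0 with a 2048-byte block size.
  AIOFILE=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile
  dd if=/dev/zero of="$AIOFILE" bs=2048 count=5000
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create "$AIOFILE" AIO0 2048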
00:09:13.758 [2024-04-15 20:35:57.081135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43977 ] 00:09:13.758 [2024-04-15 20:35:57.226614] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.017 [2024-04-15 20:35:57.395905] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:14.017 [2024-04-15 20:35:57.396104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.585 20:35:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:14.585 20:35:57 -- common/autotest_common.sh@852 -- # return 0 00:09:14.585 20:35:57 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:09:14.585 20:35:57 -- bdev/blockdev.sh@694 -- # setup_bdev_conf 00:09:14.585 20:35:57 -- bdev/blockdev.sh@51 -- # rpc_cmd 00:09:14.585 20:35:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:14.585 20:35:57 -- common/autotest_common.sh@10 -- # set +x 00:09:15.152 [2024-04-15 20:35:58.437283] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:09:15.152 [2024-04-15 20:35:58.437365] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:09:15.152 00:09:15.152 [2024-04-15 20:35:58.445224] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:09:15.152 [2024-04-15 20:35:58.445263] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:09:15.152 00:09:15.152 Malloc0 00:09:15.152 Malloc1 00:09:15.152 Malloc2 00:09:15.152 Malloc3 00:09:15.152 Malloc4 00:09:15.152 Malloc5 00:09:15.412 Malloc6 00:09:15.412 Malloc7 00:09:15.412 Malloc8 00:09:15.412 Malloc9 00:09:15.412 [2024-04-15 20:35:58.774578] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:15.412 [2024-04-15 20:35:58.774654] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:15.412 [2024-04-15 20:35:58.774686] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002d980 00:09:15.412 [2024-04-15 20:35:58.774711] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:15.412 [2024-04-15 20:35:58.776063] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:15.412 [2024-04-15 20:35:58.776104] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:09:15.412 TestPT 00:09:15.412 20:35:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:15.412 20:35:58 -- bdev/blockdev.sh@74 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:09:15.412 5000+0 records in 00:09:15.412 5000+0 records out 00:09:15.412 10240000 bytes (10 MB) copied, 0.0329432 s, 311 MB/s 00:09:15.412 20:35:58 -- bdev/blockdev.sh@75 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:09:15.412 20:35:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:15.412 20:35:58 -- common/autotest_common.sh@10 -- # set +x 00:09:15.412 AIO0 00:09:15.412 20:35:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:15.412 20:35:58 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:09:15.412 20:35:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:15.412 20:35:58 -- common/autotest_common.sh@10 -- # set +x 00:09:15.412 
20:35:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:15.412 20:35:58 -- bdev/blockdev.sh@738 -- # cat 00:09:15.412 20:35:58 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:09:15.412 20:35:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:15.412 20:35:58 -- common/autotest_common.sh@10 -- # set +x 00:09:15.412 20:35:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:15.412 20:35:58 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:09:15.412 20:35:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:15.412 20:35:58 -- common/autotest_common.sh@10 -- # set +x 00:09:15.671 20:35:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:15.671 20:35:58 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:15.671 20:35:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:15.671 20:35:58 -- common/autotest_common.sh@10 -- # set +x 00:09:15.671 20:35:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:15.671 20:35:58 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:09:15.671 20:35:58 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:09:15.671 20:35:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:15.671 20:35:58 -- common/autotest_common.sh@10 -- # set +x 00:09:15.671 20:35:58 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:09:15.671 20:35:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:15.671 20:35:59 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:09:15.671 20:35:59 -- bdev/blockdev.sh@747 -- # jq -r .name 00:09:15.672 20:35:59 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "59a74dca-3346-4c44-ace2-b3011241ebb6"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "59a74dca-3346-4c44-ace2-b3011241ebb6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "0e5a6515-2578-5dea-b0b1-5b807f7add5b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "0e5a6515-2578-5dea-b0b1-5b807f7add5b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "b8eff89b-eab8-5eba-86e3-f937175ef6be"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "b8eff89b-eab8-5eba-86e3-f937175ef6be",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "b7064883-40b8-5375-a8d7-5a6c46885ecc"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b7064883-40b8-5375-a8d7-5a6c46885ecc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "26f08935-8efc-5efb-9779-b9684b9fbe99"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "26f08935-8efc-5efb-9779-b9684b9fbe99",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "094ebdb5-5980-5c61-94fe-b160668d82f1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "094ebdb5-5980-5c61-94fe-b160668d82f1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "227e057b-9753-50d2-9b8c-3f0d96eb6133"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "227e057b-9753-50d2-9b8c-3f0d96eb6133",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": 
[' ' "59341c11-4a7f-5946-9fdd-5cb4d013c559"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "59341c11-4a7f-5946-9fdd-5cb4d013c559",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "d483fc83-bef3-5c88-a5b3-45d69a406c98"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d483fc83-bef3-5c88-a5b3-45d69a406c98",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "06094ccb-6b2c-5f2e-8bc2-33ca2a56d15f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "06094ccb-6b2c-5f2e-8bc2-33ca2a56d15f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "388bf8e8-84e1-59a2-9a65-3f77b31c0f3b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "388bf8e8-84e1-59a2-9a65-3f77b31c0f3b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "e2e3e826-ba8f-5907-bbc1-febcb4e64ca4"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "e2e3e826-ba8f-5907-bbc1-febcb4e64ca4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' 
"compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "c48276d1-7ea3-4da5-8768-77cf170fc77e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "c48276d1-7ea3-4da5-8768-77cf170fc77e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "c48276d1-7ea3-4da5-8768-77cf170fc77e",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "d498323b-3752-4a6d-b3eb-8ed5b0c62e1e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "931eb6ce-d6e6-412f-afb2-45c909211eee",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "ce526ab6-1d6e-4a60-abab-e30019fd6f49"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "ce526ab6-1d6e-4a60-abab-e30019fd6f49",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "ce526ab6-1d6e-4a60-abab-e30019fd6f49",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "231964b0-be82-4f31-9ca8-b7a5f4215df2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "1e9eb1a3-53f3-4de1-96ce-d5a8acd4d083",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "a0555647-ebe4-4ccc-865b-89abd1cb142c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a0555647-ebe4-4ccc-865b-89abd1cb142c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 
0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "a0555647-ebe4-4ccc-865b-89abd1cb142c",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "44d03e7e-fd0a-45a4-b022-341ffab2b88f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "c33fd0f3-88ea-4839-800b-4c21b00a1a86",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "5adc136b-42c3-47f6-b932-c50851bebf20"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "5adc136b-42c3-47f6-b932-c50851bebf20",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:09:15.673 20:35:59 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:09:15.673 20:35:59 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Malloc0 00:09:15.673 20:35:59 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:09:15.673 20:35:59 -- bdev/blockdev.sh@752 -- # killprocess 43977 00:09:15.673 20:35:59 -- common/autotest_common.sh@926 -- # '[' -z 43977 ']' 00:09:15.673 20:35:59 -- common/autotest_common.sh@930 -- # kill -0 43977 00:09:15.673 20:35:59 -- common/autotest_common.sh@931 -- # uname 00:09:15.673 20:35:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:15.673 20:35:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 43977 00:09:15.673 killing process with pid 43977 00:09:15.673 20:35:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:15.673 20:35:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:15.673 20:35:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 43977' 00:09:15.673 20:35:59 -- common/autotest_common.sh@945 -- # kill 43977 00:09:15.673 20:35:59 -- common/autotest_common.sh@950 -- # wait 43977 00:09:18.963 20:36:02 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:18.963 20:36:02 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:09:18.963 20:36:02 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:09:18.963 
20:36:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:18.963 20:36:02 -- common/autotest_common.sh@10 -- # set +x 00:09:18.963 ************************************ 00:09:18.963 START TEST bdev_hello_world 00:09:18.963 ************************************ 00:09:18.963 20:36:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:09:18.963 [2024-04-15 20:36:02.247224] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:09:18.963 [2024-04-15 20:36:02.247393] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid44069 ] 00:09:18.963 [2024-04-15 20:36:02.395655] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.222 [2024-04-15 20:36:02.593921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.789 [2024-04-15 20:36:03.023722] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:09:19.789 [2024-04-15 20:36:03.023814] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:09:19.789 [2024-04-15 20:36:03.031670] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:09:19.789 [2024-04-15 20:36:03.031739] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:09:19.789 [2024-04-15 20:36:03.039691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:19.789 [2024-04-15 20:36:03.039741] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:09:19.789 [2024-04-15 20:36:03.039779] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:09:19.789 [2024-04-15 20:36:03.217248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:19.789 [2024-04-15 20:36:03.217345] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.789 [2024-04-15 20:36:03.217396] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002b880 00:09:19.789 [2024-04-15 20:36:03.217424] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.789 [2024-04-15 20:36:03.219056] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.789 [2024-04-15 20:36:03.219098] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:09:20.049 [2024-04-15 20:36:03.497677] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:20.049 [2024-04-15 20:36:03.497756] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:09:20.049 [2024-04-15 20:36:03.497813] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:20.049 [2024-04-15 20:36:03.497858] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:20.049 [2024-04-15 20:36:03.497923] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:20.049 [2024-04-15 20:36:03.497956] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:20.049 [2024-04-15 20:36:03.497995] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
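The hello-world cycle above (open Malloc0, write, read back, compare) is the stock example binary run against the generated bdev config; the full invocation, as recorded in the trace:

  # Sketch: the same hello_bdev invocation as in the trace above.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/examples/hello_bdev" --json "$SPDK/test/bdev/bdev.json" -b Malloc0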
00:09:20.049 00:09:20.049 [2024-04-15 20:36:03.498020] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:22.588 ************************************ 00:09:22.588 END TEST bdev_hello_world 00:09:22.588 ************************************ 00:09:22.588 00:09:22.588 real 0m3.464s 00:09:22.588 user 0m2.906s 00:09:22.588 sys 0m0.349s 00:09:22.588 20:36:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:22.588 20:36:05 -- common/autotest_common.sh@10 -- # set +x 00:09:22.588 20:36:05 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:09:22.588 20:36:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:22.588 20:36:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:22.588 20:36:05 -- common/autotest_common.sh@10 -- # set +x 00:09:22.588 ************************************ 00:09:22.588 START TEST bdev_bounds 00:09:22.588 ************************************ 00:09:22.588 20:36:05 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:09:22.588 20:36:05 -- bdev/blockdev.sh@288 -- # bdevio_pid=44131 00:09:22.588 20:36:05 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:22.588 Process bdevio pid: 44131 00:09:22.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.588 20:36:05 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:22.588 20:36:05 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 44131' 00:09:22.588 20:36:05 -- bdev/blockdev.sh@291 -- # waitforlisten 44131 00:09:22.588 20:36:05 -- common/autotest_common.sh@819 -- # '[' -z 44131 ']' 00:09:22.588 20:36:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.588 20:36:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:22.588 20:36:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.588 20:36:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:22.588 20:36:05 -- common/autotest_common.sh@10 -- # set +x 00:09:22.588 [2024-04-15 20:36:05.777072] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:09:22.588 [2024-04-15 20:36:05.777230] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid44131 ] 00:09:22.588 [2024-04-15 20:36:05.947910] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:22.846 [2024-04-15 20:36:06.147112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.846 [2024-04-15 20:36:06.147129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.846 [2024-04-15 20:36:06.147129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:23.104 [2024-04-15 20:36:06.594954] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:09:23.104 [2024-04-15 20:36:06.595042] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:09:23.104 [2024-04-15 20:36:06.602914] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:09:23.104 [2024-04-15 20:36:06.603002] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:09:23.363 [2024-04-15 20:36:06.610936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:23.363 [2024-04-15 20:36:06.610984] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:09:23.363 [2024-04-15 20:36:06.611002] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:09:23.363 [2024-04-15 20:36:06.793468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:23.363 [2024-04-15 20:36:06.793561] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.363 [2024-04-15 20:36:06.793620] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002b880 00:09:23.363 [2024-04-15 20:36:06.793837] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.363 [2024-04-15 20:36:06.795378] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.363 [2024-04-15 20:36:06.795420] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:09:23.967 20:36:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:23.967 20:36:07 -- common/autotest_common.sh@852 -- # return 0 00:09:23.967 20:36:07 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:23.967 I/O targets: 00:09:23.967 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:09:23.967 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:09:23.967 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:09:23.967 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:09:23.967 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:09:23.967 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:09:23.967 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:09:23.967 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:09:23.967 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:09:23.967 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:09:23.967 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:09:23.967 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:09:23.967 raid0: 131072 blocks of 512 bytes (64 MiB) 00:09:23.967 concat0: 131072 blocks of 512 bytes (64 MiB) 00:09:23.967 raid1: 65536 blocks of 512 bytes (32 MiB) 00:09:23.967 AIO0: 5000 blocks of 2048 bytes (10 MiB) 
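The I/O target list above is printed by bdevio before its CUnit suites start. As the preceding log lines show, the harness launches bdevio with -w so it idles until triggered over RPC, waits for the UNIX domain socket, runs tests.py perform_tests, and relies on a trap to kill the daemon on any failure. A rough standalone equivalent, assuming the same tree and config (the harness's waitforlisten helper is approximated here with a socket poll):

    cd /home/vagrant/spdk_repo/spdk
    # Launch bdevio idle (-w), with the same flags the harness logs above.
    ./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    bdevio_pid=$!
    trap 'kill "$bdevio_pid"; exit 1' SIGINT SIGTERM EXIT
    # Stand-in for waitforlisten: poll for the default RPC socket.
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    # Trigger the full CUnit matrix over RPC: one suite per registered bdev.
    ./test/bdev/bdevio/tests.py perform_tests
    trap - SIGINT SIGTERM EXIT
    kill "$bdevio_pid"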
00:09:23.967 00:09:23.967 00:09:23.967 CUnit - A unit testing framework for C - Version 2.1-3 00:09:23.967 http://cunit.sourceforge.net/ 00:09:23.967 00:09:23.967 00:09:23.967 Suite: bdevio tests on: AIO0 00:09:23.967 Test: blockdev write read block ...passed 00:09:23.967 Test: blockdev write zeroes read block ...passed 00:09:23.967 Test: blockdev write zeroes read no split ...passed 00:09:23.967 Test: blockdev write zeroes read split ...passed 00:09:23.967 Test: blockdev write zeroes read split partial ...passed 00:09:23.967 Test: blockdev reset ...passed 00:09:23.967 Test: blockdev write read 8 blocks ...passed 00:09:23.967 Test: blockdev write read size > 128k ...passed 00:09:23.967 Test: blockdev write read invalid size ...passed 00:09:23.967 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:23.967 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:23.967 Test: blockdev write read max offset ...passed 00:09:23.967 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:23.967 Test: blockdev writev readv 8 blocks ...passed 00:09:23.967 Test: blockdev writev readv 30 x 1block ...passed 00:09:23.967 Test: blockdev writev readv block ...passed 00:09:23.967 Test: blockdev writev readv size > 128k ...passed 00:09:23.967 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:23.967 Test: blockdev comparev and writev ...passed 00:09:23.967 Test: blockdev nvme passthru rw ...passed 00:09:23.967 Test: blockdev nvme passthru vendor specific ...passed 00:09:23.967 Test: blockdev nvme admin passthru ...passed 00:09:23.967 Test: blockdev copy ...passed 00:09:23.967 Suite: bdevio tests on: raid1 00:09:23.967 Test: blockdev write read block ...passed 00:09:23.967 Test: blockdev write zeroes read block ...passed 00:09:23.967 Test: blockdev write zeroes read no split ...passed 00:09:24.227 Test: blockdev write zeroes read split ...passed 00:09:24.227 Test: blockdev write zeroes read split partial ...passed 00:09:24.227 Test: blockdev reset ...passed 00:09:24.227 Test: blockdev write read 8 blocks ...passed 00:09:24.227 Test: blockdev write read size > 128k ...passed 00:09:24.227 Test: blockdev write read invalid size ...passed 00:09:24.227 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:24.227 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:24.227 Test: blockdev write read max offset ...passed 00:09:24.227 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:24.227 Test: blockdev writev readv 8 blocks ...passed 00:09:24.227 Test: blockdev writev readv 30 x 1block ...passed 00:09:24.227 Test: blockdev writev readv block ...passed 00:09:24.227 Test: blockdev writev readv size > 128k ...passed 00:09:24.227 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:24.227 Test: blockdev comparev and writev ...passed 00:09:24.227 Test: blockdev nvme passthru rw ...passed 00:09:24.227 Test: blockdev nvme passthru vendor specific ...passed 00:09:24.227 Test: blockdev nvme admin passthru ...passed 00:09:24.227 Test: blockdev copy ...passed 00:09:24.227 Suite: bdevio tests on: concat0 00:09:24.227 Test: blockdev write read block ...passed 00:09:24.227 Test: blockdev write zeroes read block ...passed 00:09:24.227 Test: blockdev write zeroes read no split ...passed 00:09:24.227 Test: blockdev write zeroes read split ...passed 00:09:24.227 Test: blockdev write zeroes read split partial ...passed 00:09:24.227 Test: blockdev reset 
...passed 00:09:24.227 Test: blockdev write read 8 blocks ...passed 00:09:24.227 Test: blockdev write read size > 128k ...passed 00:09:24.227 Test: blockdev write read invalid size ...passed 00:09:24.227 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:24.227 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:24.227 Test: blockdev write read max offset ...passed 00:09:24.227 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:24.227 Test: blockdev writev readv 8 blocks ...passed 00:09:24.227 Test: blockdev writev readv 30 x 1block ...passed 00:09:24.227 Test: blockdev writev readv block ...passed 00:09:24.227 Test: blockdev writev readv size > 128k ...passed 00:09:24.227 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:24.227 Test: blockdev comparev and writev ...passed 00:09:24.227 Test: blockdev nvme passthru rw ...passed 00:09:24.227 Test: blockdev nvme passthru vendor specific ...passed 00:09:24.227 Test: blockdev nvme admin passthru ...passed 00:09:24.227 Test: blockdev copy ...passed 00:09:24.227 Suite: bdevio tests on: raid0 00:09:24.227 Test: blockdev write read block ...passed 00:09:24.227 Test: blockdev write zeroes read block ...passed 00:09:24.227 Test: blockdev write zeroes read no split ...passed 00:09:24.227 Test: blockdev write zeroes read split ...passed 00:09:24.227 Test: blockdev write zeroes read split partial ...passed 00:09:24.227 Test: blockdev reset ...passed 00:09:24.227 Test: blockdev write read 8 blocks ...passed 00:09:24.227 Test: blockdev write read size > 128k ...passed 00:09:24.227 Test: blockdev write read invalid size ...passed 00:09:24.227 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:24.227 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:24.227 Test: blockdev write read max offset ...passed 00:09:24.227 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:24.227 Test: blockdev writev readv 8 blocks ...passed 00:09:24.227 Test: blockdev writev readv 30 x 1block ...passed 00:09:24.227 Test: blockdev writev readv block ...passed 00:09:24.227 Test: blockdev writev readv size > 128k ...passed 00:09:24.227 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:24.227 Test: blockdev comparev and writev ...passed 00:09:24.227 Test: blockdev nvme passthru rw ...passed 00:09:24.227 Test: blockdev nvme passthru vendor specific ...passed 00:09:24.227 Test: blockdev nvme admin passthru ...passed 00:09:24.227 Test: blockdev copy ...passed 00:09:24.227 Suite: bdevio tests on: TestPT 00:09:24.227 Test: blockdev write read block ...passed 00:09:24.227 Test: blockdev write zeroes read block ...passed 00:09:24.227 Test: blockdev write zeroes read no split ...passed 00:09:24.227 Test: blockdev write zeroes read split ...passed 00:09:24.227 Test: blockdev write zeroes read split partial ...passed 00:09:24.227 Test: blockdev reset ...passed 00:09:24.487 Test: blockdev write read 8 blocks ...passed 00:09:24.487 Test: blockdev write read size > 128k ...passed 00:09:24.487 Test: blockdev write read invalid size ...passed 00:09:24.487 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:24.487 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:24.487 Test: blockdev write read max offset ...passed 00:09:24.487 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:24.487 Test: blockdev writev readv 8 blocks 
...passed 00:09:24.487 Test: blockdev writev readv 30 x 1block ...passed 00:09:24.487 Test: blockdev writev readv block ...passed 00:09:24.487 Test: blockdev writev readv size > 128k ...passed 00:09:24.487 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:24.487 Test: blockdev comparev and writev ...passed 00:09:24.487 Test: blockdev nvme passthru rw ...passed 00:09:24.487 Test: blockdev nvme passthru vendor specific ...passed 00:09:24.487 Test: blockdev nvme admin passthru ...passed 00:09:24.487 Test: blockdev copy ...passed 00:09:24.487 Suite: bdevio tests on: Malloc2p7 00:09:24.487 Test: blockdev write read block ...passed 00:09:24.487 Test: blockdev write zeroes read block ...passed 00:09:24.487 Test: blockdev write zeroes read no split ...passed 00:09:24.487 Test: blockdev write zeroes read split ...passed 00:09:24.487 Test: blockdev write zeroes read split partial ...passed 00:09:24.487 Test: blockdev reset ...passed 00:09:24.487 Test: blockdev write read 8 blocks ...passed 00:09:24.487 Test: blockdev write read size > 128k ...passed 00:09:24.487 Test: blockdev write read invalid size ...passed 00:09:24.487 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:24.487 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:24.487 Test: blockdev write read max offset ...passed 00:09:24.487 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:24.487 Test: blockdev writev readv 8 blocks ...passed 00:09:24.487 Test: blockdev writev readv 30 x 1block ...passed 00:09:24.487 Test: blockdev writev readv block ...passed 00:09:24.487 Test: blockdev writev readv size > 128k ...passed 00:09:24.487 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:24.487 Test: blockdev comparev and writev ...passed 00:09:24.487 Test: blockdev nvme passthru rw ...passed 00:09:24.487 Test: blockdev nvme passthru vendor specific ...passed 00:09:24.487 Test: blockdev nvme admin passthru ...passed 00:09:24.487 Test: blockdev copy ...passed 00:09:24.487 Suite: bdevio tests on: Malloc2p6 00:09:24.487 Test: blockdev write read block ...passed 00:09:24.487 Test: blockdev write zeroes read block ...passed 00:09:24.487 Test: blockdev write zeroes read no split ...passed 00:09:24.487 Test: blockdev write zeroes read split ...passed 00:09:24.487 Test: blockdev write zeroes read split partial ...passed 00:09:24.487 Test: blockdev reset ...passed 00:09:24.487 Test: blockdev write read 8 blocks ...passed 00:09:24.487 Test: blockdev write read size > 128k ...passed 00:09:24.487 Test: blockdev write read invalid size ...passed 00:09:24.487 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:24.487 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:24.488 Test: blockdev write read max offset ...passed 00:09:24.488 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:24.488 Test: blockdev writev readv 8 blocks ...passed 00:09:24.488 Test: blockdev writev readv 30 x 1block ...passed 00:09:24.488 Test: blockdev writev readv block ...passed 00:09:24.488 Test: blockdev writev readv size > 128k ...passed 00:09:24.488 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:24.488 Test: blockdev comparev and writev ...passed 00:09:24.488 Test: blockdev nvme passthru rw ...passed 00:09:24.488 Test: blockdev nvme passthru vendor specific ...passed 00:09:24.488 Test: blockdev nvme admin passthru ...passed 00:09:24.488 Test: blockdev copy ...passed 
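Every suite above exercises the same fixed list of blockdev tests, once per registered bdev from the I/O target list. The fio stage later in this log narrows that same set to bdevs advertising unmap support, using the jq filter visible in the blockdev.sh@353 line further down. A sketch of applying that filter to a live app over RPC (rpc.py and a running SPDK app on the default socket are assumed):

    # bdev_get_bdevs returns a JSON array of descriptors like the ones printf'd
    # later in this log; keep only names whose supported_io_types.unmap is true.
    ./scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | select(.supported_io_types.unmap == true) | .name'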
00:09:24.488 Suite: bdevio tests on: Malloc2p5 00:09:24.488 Test: blockdev write read block ...passed 00:09:24.488 Test: blockdev write zeroes read block ...passed 00:09:24.488 Test: blockdev write zeroes read no split ...passed 00:09:24.488 Test: blockdev write zeroes read split ...passed 00:09:24.488 Test: blockdev write zeroes read split partial ...passed 00:09:24.488 Test: blockdev reset ...passed 00:09:24.488 Test: blockdev write read 8 blocks ...passed 00:09:24.488 Test: blockdev write read size > 128k ...passed 00:09:24.488 Test: blockdev write read invalid size ...passed 00:09:24.488 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:24.488 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:24.488 Test: blockdev write read max offset ...passed 00:09:24.488 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:24.488 Test: blockdev writev readv 8 blocks ...passed 00:09:24.488 Test: blockdev writev readv 30 x 1block ...passed 00:09:24.488 Test: blockdev writev readv block ...passed 00:09:24.488 Test: blockdev writev readv size > 128k ...passed 00:09:24.488 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:24.488 Test: blockdev comparev and writev ...passed 00:09:24.488 Test: blockdev nvme passthru rw ...passed 00:09:24.488 Test: blockdev nvme passthru vendor specific ...passed 00:09:24.488 Test: blockdev nvme admin passthru ...passed 00:09:24.488 Test: blockdev copy ...passed 00:09:24.488 Suite: bdevio tests on: Malloc2p4 00:09:24.488 Test: blockdev write read block ...passed 00:09:24.488 Test: blockdev write zeroes read block ...passed 00:09:24.488 Test: blockdev write zeroes read no split ...passed 00:09:24.488 Test: blockdev write zeroes read split ...passed 00:09:24.747 Test: blockdev write zeroes read split partial ...passed 00:09:24.747 Test: blockdev reset ...passed 00:09:24.747 Test: blockdev write read 8 blocks ...passed 00:09:24.747 Test: blockdev write read size > 128k ...passed 00:09:24.747 Test: blockdev write read invalid size ...passed 00:09:24.747 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:24.747 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:24.747 Test: blockdev write read max offset ...passed 00:09:24.747 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:24.747 Test: blockdev writev readv 8 blocks ...passed 00:09:24.747 Test: blockdev writev readv 30 x 1block ...passed 00:09:24.747 Test: blockdev writev readv block ...passed 00:09:24.747 Test: blockdev writev readv size > 128k ...passed 00:09:24.747 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:24.747 Test: blockdev comparev and writev ...passed 00:09:24.747 Test: blockdev nvme passthru rw ...passed 00:09:24.747 Test: blockdev nvme passthru vendor specific ...passed 00:09:24.747 Test: blockdev nvme admin passthru ...passed 00:09:24.747 Test: blockdev copy ...passed 00:09:24.747 Suite: bdevio tests on: Malloc2p3 00:09:24.747 Test: blockdev write read block ...passed 00:09:24.747 Test: blockdev write zeroes read block ...passed 00:09:24.747 Test: blockdev write zeroes read no split ...passed 00:09:24.747 Test: blockdev write zeroes read split ...passed 00:09:24.747 Test: blockdev write zeroes read split partial ...passed 00:09:24.747 Test: blockdev reset ...passed 00:09:24.747 Test: blockdev write read 8 blocks ...passed 00:09:24.747 Test: blockdev write read size > 128k ...passed 00:09:24.747 Test: 
blockdev write read invalid size ...passed 00:09:24.747 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:24.747 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:24.747 Test: blockdev write read max offset ...passed 00:09:24.747 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:24.747 Test: blockdev writev readv 8 blocks ...passed 00:09:24.747 Test: blockdev writev readv 30 x 1block ...passed 00:09:24.747 Test: blockdev writev readv block ...passed 00:09:24.747 Test: blockdev writev readv size > 128k ...passed 00:09:24.747 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:24.747 Test: blockdev comparev and writev ...passed 00:09:24.747 Test: blockdev nvme passthru rw ...passed 00:09:24.747 Test: blockdev nvme passthru vendor specific ...passed 00:09:24.747 Test: blockdev nvme admin passthru ...passed 00:09:24.747 Test: blockdev copy ...passed 00:09:24.747 Suite: bdevio tests on: Malloc2p2 00:09:24.747 Test: blockdev write read block ...passed 00:09:24.747 Test: blockdev write zeroes read block ...passed 00:09:24.747 Test: blockdev write zeroes read no split ...passed 00:09:24.747 Test: blockdev write zeroes read split ...passed 00:09:24.747 Test: blockdev write zeroes read split partial ...passed 00:09:24.747 Test: blockdev reset ...passed 00:09:24.747 Test: blockdev write read 8 blocks ...passed 00:09:24.747 Test: blockdev write read size > 128k ...passed 00:09:24.747 Test: blockdev write read invalid size ...passed 00:09:24.747 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:24.747 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:24.747 Test: blockdev write read max offset ...passed 00:09:24.747 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:24.747 Test: blockdev writev readv 8 blocks ...passed 00:09:24.747 Test: blockdev writev readv 30 x 1block ...passed 00:09:24.747 Test: blockdev writev readv block ...passed 00:09:24.747 Test: blockdev writev readv size > 128k ...passed 00:09:24.747 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:24.747 Test: blockdev comparev and writev ...passed 00:09:24.747 Test: blockdev nvme passthru rw ...passed 00:09:24.747 Test: blockdev nvme passthru vendor specific ...passed 00:09:24.747 Test: blockdev nvme admin passthru ...passed 00:09:24.747 Test: blockdev copy ...passed 00:09:24.747 Suite: bdevio tests on: Malloc2p1 00:09:24.747 Test: blockdev write read block ...passed 00:09:24.747 Test: blockdev write zeroes read block ...passed 00:09:24.747 Test: blockdev write zeroes read no split ...passed 00:09:24.747 Test: blockdev write zeroes read split ...passed 00:09:24.747 Test: blockdev write zeroes read split partial ...passed 00:09:24.747 Test: blockdev reset ...passed 00:09:24.747 Test: blockdev write read 8 blocks ...passed 00:09:24.747 Test: blockdev write read size > 128k ...passed 00:09:24.747 Test: blockdev write read invalid size ...passed 00:09:24.747 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:24.747 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:24.747 Test: blockdev write read max offset ...passed 00:09:24.747 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:24.747 Test: blockdev writev readv 8 blocks ...passed 00:09:24.747 Test: blockdev writev readv 30 x 1block ...passed 00:09:24.747 Test: blockdev writev readv block ...passed 
00:09:24.747 Test: blockdev writev readv size > 128k ...passed 00:09:24.747 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:24.747 Test: blockdev comparev and writev ...passed 00:09:24.747 Test: blockdev nvme passthru rw ...passed 00:09:24.747 Test: blockdev nvme passthru vendor specific ...passed 00:09:24.747 Test: blockdev nvme admin passthru ...passed 00:09:24.747 Test: blockdev copy ...passed 00:09:24.747 Suite: bdevio tests on: Malloc2p0 00:09:24.747 Test: blockdev write read block ...passed 00:09:24.747 Test: blockdev write zeroes read block ...passed 00:09:24.747 Test: blockdev write zeroes read no split ...passed 00:09:24.747 Test: blockdev write zeroes read split ...passed 00:09:25.008 Test: blockdev write zeroes read split partial ...passed 00:09:25.008 Test: blockdev reset ...passed 00:09:25.008 Test: blockdev write read 8 blocks ...passed 00:09:25.008 Test: blockdev write read size > 128k ...passed 00:09:25.008 Test: blockdev write read invalid size ...passed 00:09:25.008 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:25.008 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:25.008 Test: blockdev write read max offset ...passed 00:09:25.008 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:25.008 Test: blockdev writev readv 8 blocks ...passed 00:09:25.008 Test: blockdev writev readv 30 x 1block ...passed 00:09:25.008 Test: blockdev writev readv block ...passed 00:09:25.008 Test: blockdev writev readv size > 128k ...passed 00:09:25.008 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:25.008 Test: blockdev comparev and writev ...passed 00:09:25.008 Test: blockdev nvme passthru rw ...passed 00:09:25.008 Test: blockdev nvme passthru vendor specific ...passed 00:09:25.008 Test: blockdev nvme admin passthru ...passed 00:09:25.008 Test: blockdev copy ...passed 00:09:25.008 Suite: bdevio tests on: Malloc1p1 00:09:25.008 Test: blockdev write read block ...passed 00:09:25.008 Test: blockdev write zeroes read block ...passed 00:09:25.008 Test: blockdev write zeroes read no split ...passed 00:09:25.008 Test: blockdev write zeroes read split ...passed 00:09:25.008 Test: blockdev write zeroes read split partial ...passed 00:09:25.008 Test: blockdev reset ...passed 00:09:25.008 Test: blockdev write read 8 blocks ...passed 00:09:25.008 Test: blockdev write read size > 128k ...passed 00:09:25.008 Test: blockdev write read invalid size ...passed 00:09:25.008 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:25.008 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:25.008 Test: blockdev write read max offset ...passed 00:09:25.008 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:25.008 Test: blockdev writev readv 8 blocks ...passed 00:09:25.008 Test: blockdev writev readv 30 x 1block ...passed 00:09:25.008 Test: blockdev writev readv block ...passed 00:09:25.008 Test: blockdev writev readv size > 128k ...passed 00:09:25.008 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:25.008 Test: blockdev comparev and writev ...passed 00:09:25.008 Test: blockdev nvme passthru rw ...passed 00:09:25.008 Test: blockdev nvme passthru vendor specific ...passed 00:09:25.008 Test: blockdev nvme admin passthru ...passed 00:09:25.008 Test: blockdev copy ...passed 00:09:25.008 Suite: bdevio tests on: Malloc1p0 00:09:25.008 Test: blockdev write read block ...passed 00:09:25.008 Test: blockdev 
write zeroes read block ...passed 00:09:25.008 Test: blockdev write zeroes read no split ...passed 00:09:25.008 Test: blockdev write zeroes read split ...passed 00:09:25.008 Test: blockdev write zeroes read split partial ...passed 00:09:25.008 Test: blockdev reset ...passed 00:09:25.008 Test: blockdev write read 8 blocks ...passed 00:09:25.008 Test: blockdev write read size > 128k ...passed 00:09:25.008 Test: blockdev write read invalid size ...passed 00:09:25.008 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:25.008 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:25.008 Test: blockdev write read max offset ...passed 00:09:25.008 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:25.008 Test: blockdev writev readv 8 blocks ...passed 00:09:25.008 Test: blockdev writev readv 30 x 1block ...passed 00:09:25.008 Test: blockdev writev readv block ...passed 00:09:25.008 Test: blockdev writev readv size > 128k ...passed 00:09:25.008 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:25.008 Test: blockdev comparev and writev ...passed 00:09:25.008 Test: blockdev nvme passthru rw ...passed 00:09:25.008 Test: blockdev nvme passthru vendor specific ...passed 00:09:25.008 Test: blockdev nvme admin passthru ...passed 00:09:25.008 Test: blockdev copy ...passed 00:09:25.008 Suite: bdevio tests on: Malloc0 00:09:25.008 Test: blockdev write read block ...passed 00:09:25.008 Test: blockdev write zeroes read block ...passed 00:09:25.008 Test: blockdev write zeroes read no split ...passed 00:09:25.008 Test: blockdev write zeroes read split ...passed 00:09:25.008 Test: blockdev write zeroes read split partial ...passed 00:09:25.008 Test: blockdev reset ...passed 00:09:25.008 Test: blockdev write read 8 blocks ...passed 00:09:25.008 Test: blockdev write read size > 128k ...passed 00:09:25.008 Test: blockdev write read invalid size ...passed 00:09:25.008 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:25.008 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:25.008 Test: blockdev write read max offset ...passed 00:09:25.008 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:25.008 Test: blockdev writev readv 8 blocks ...passed 00:09:25.008 Test: blockdev writev readv 30 x 1block ...passed 00:09:25.008 Test: blockdev writev readv block ...passed 00:09:25.008 Test: blockdev writev readv size > 128k ...passed 00:09:25.008 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:25.008 Test: blockdev comparev and writev ...passed 00:09:25.008 Test: blockdev nvme passthru rw ...passed 00:09:25.008 Test: blockdev nvme passthru vendor specific ...passed 00:09:25.008 Test: blockdev nvme admin passthru ...passed 00:09:25.008 Test: blockdev copy ...passed 00:09:25.008 00:09:25.008 Run Summary: Type Total Ran Passed Failed Inactive 00:09:25.008 suites 16 16 n/a 0 0 00:09:25.008 tests 368 368 368 0 0 00:09:25.008 asserts 2224 2224 2224 0 n/a 00:09:25.008 00:09:25.008 Elapsed time = 3.320 seconds 00:09:25.008 0 00:09:25.008 20:36:08 -- bdev/blockdev.sh@293 -- # killprocess 44131 00:09:25.008 20:36:08 -- common/autotest_common.sh@926 -- # '[' -z 44131 ']' 00:09:25.008 20:36:08 -- common/autotest_common.sh@930 -- # kill -0 44131 00:09:25.008 20:36:08 -- common/autotest_common.sh@931 -- # uname 00:09:25.008 20:36:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:25.008 20:36:08 -- common/autotest_common.sh@932 
-- # ps --no-headers -o comm= 44131 00:09:25.268 killing process with pid 44131 00:09:25.268 20:36:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:25.268 20:36:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:25.268 20:36:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 44131' 00:09:25.268 20:36:08 -- common/autotest_common.sh@945 -- # kill 44131 00:09:25.268 20:36:08 -- common/autotest_common.sh@950 -- # wait 44131 00:09:27.168 ************************************ 00:09:27.168 END TEST bdev_bounds 00:09:27.168 ************************************ 00:09:27.168 20:36:10 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:09:27.168 00:09:27.168 real 0m4.908s 00:09:27.168 user 0m12.595s 00:09:27.168 sys 0m0.555s 00:09:27.168 20:36:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:27.168 20:36:10 -- common/autotest_common.sh@10 -- # set +x 00:09:27.168 20:36:10 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:09:27.168 20:36:10 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:09:27.168 20:36:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:27.168 20:36:10 -- common/autotest_common.sh@10 -- # set +x 00:09:27.168 ************************************ 00:09:27.168 START TEST bdev_nbd 00:09:27.168 ************************************ 00:09:27.168 20:36:10 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:09:27.168 20:36:10 -- bdev/blockdev.sh@298 -- # uname -s 00:09:27.168 20:36:10 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:09:27.168 20:36:10 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:27.168 20:36:10 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:27.168 20:36:10 -- bdev/blockdev.sh@302 -- # bdev_all=($2) 00:09:27.168 20:36:10 -- bdev/blockdev.sh@302 -- # local bdev_all 00:09:27.168 20:36:10 -- bdev/blockdev.sh@303 -- # local bdev_num=16 00:09:27.168 20:36:10 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:09:27.168 20:36:10 -- bdev/blockdev.sh@307 -- # modprobe -q nbd nbds_max=16 00:09:27.168 ************************************ 00:09:27.168 END TEST bdev_nbd 00:09:27.168 ************************************ 00:09:27.168 20:36:10 -- bdev/blockdev.sh@307 -- # return 0 00:09:27.168 00:09:27.168 real 0m0.010s 00:09:27.168 user 0m0.003s 00:09:27.168 sys 0m0.007s 00:09:27.168 20:36:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:27.168 20:36:10 -- common/autotest_common.sh@10 -- # set +x 00:09:27.168 20:36:10 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:09:27.168 20:36:10 -- bdev/blockdev.sh@762 -- # '[' bdev = nvme ']' 00:09:27.168 20:36:10 -- bdev/blockdev.sh@762 -- # '[' bdev = gpt ']' 00:09:27.168 20:36:10 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:09:27.168 20:36:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:27.168 20:36:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:27.168 20:36:10 -- common/autotest_common.sh@10 -- # set +x 00:09:27.168 ************************************ 00:09:27.168 START TEST bdev_fio 
00:09:27.168 ************************************ 00:09:27.168 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:09:27.168 20:36:10 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:09:27.168 20:36:10 -- bdev/blockdev.sh@329 -- # local env_context 00:09:27.168 20:36:10 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:09:27.169 20:36:10 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:09:27.169 20:36:10 -- bdev/blockdev.sh@337 -- # echo '' 00:09:27.169 20:36:10 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:09:27.169 20:36:10 -- bdev/blockdev.sh@337 -- # env_context= 00:09:27.169 20:36:10 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:09:27.169 20:36:10 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:09:27.169 20:36:10 -- common/autotest_common.sh@1260 -- # local workload=verify 00:09:27.169 20:36:10 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO 00:09:27.169 20:36:10 -- common/autotest_common.sh@1262 -- # local env_context= 00:09:27.169 20:36:10 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:09:27.169 20:36:10 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:09:27.169 20:36:10 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:09:27.169 20:36:10 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:09:27.169 20:36:10 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:09:27.426 20:36:10 -- common/autotest_common.sh@1280 -- # cat 00:09:27.426 20:36:10 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:09:27.426 20:36:10 -- common/autotest_common.sh@1293 -- # cat 00:09:27.426 20:36:10 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:09:27.426 20:36:10 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:09:27.426 20:36:10 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:09:27.426 20:36:10 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:09:27.426 20:36:10 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:09:27.426 20:36:10 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc0]' 00:09:27.426 20:36:10 -- bdev/blockdev.sh@341 -- # echo filename=Malloc0 00:09:27.426 20:36:10 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:09:27.426 20:36:10 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p0]' 00:09:27.426 20:36:10 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p0 00:09:27.426 20:36:10 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:09:27.426 20:36:10 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p1]' 00:09:27.426 20:36:10 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p1 00:09:27.426 20:36:10 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:09:27.426 20:36:10 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p0]' 00:09:27.426 20:36:10 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p0 00:09:27.426 20:36:10 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:09:27.426 20:36:10 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p1]' 00:09:27.426 20:36:10 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p1 00:09:27.426 20:36:10 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:09:27.426 20:36:10 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p2]' 00:09:27.426 20:36:10 -- bdev/blockdev.sh@341 
-- # echo filename=Malloc2p2 00:09:27.426 20:36:10 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:09:27.426 20:36:10 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p3]' 00:09:27.426 20:36:10 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p3 00:09:27.426 20:36:10 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:09:27.426 20:36:10 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p4]' 00:09:27.426 20:36:10 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p4 00:09:27.426 20:36:10 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:09:27.426 20:36:10 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p5]' 00:09:27.426 20:36:10 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p5 00:09:27.426 20:36:10 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:09:27.426 20:36:10 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p6]' 00:09:27.426 20:36:10 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p6 00:09:27.426 20:36:10 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:09:27.426 20:36:10 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p7]' 00:09:27.426 20:36:10 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p7 00:09:27.426 20:36:10 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:09:27.426 20:36:10 -- bdev/blockdev.sh@340 -- # echo '[job_TestPT]' 00:09:27.426 20:36:10 -- bdev/blockdev.sh@341 -- # echo filename=TestPT 00:09:27.426 20:36:10 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:09:27.426 20:36:10 -- bdev/blockdev.sh@340 -- # echo '[job_raid0]' 00:09:27.426 20:36:10 -- bdev/blockdev.sh@341 -- # echo filename=raid0 00:09:27.426 20:36:10 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:09:27.426 20:36:10 -- bdev/blockdev.sh@340 -- # echo '[job_concat0]' 00:09:27.426 20:36:10 -- bdev/blockdev.sh@341 -- # echo filename=concat0 00:09:27.426 20:36:10 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:09:27.426 20:36:10 -- bdev/blockdev.sh@340 -- # echo '[job_raid1]' 00:09:27.426 20:36:10 -- bdev/blockdev.sh@341 -- # echo filename=raid1 00:09:27.426 20:36:10 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:09:27.426 20:36:10 -- bdev/blockdev.sh@340 -- # echo '[job_AIO0]' 00:09:27.426 20:36:10 -- bdev/blockdev.sh@341 -- # echo filename=AIO0 00:09:27.426 20:36:10 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:09:27.426 20:36:10 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:09:27.426 20:36:10 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:09:27.426 20:36:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:27.426 20:36:10 -- common/autotest_common.sh@10 -- # set +x 00:09:27.426 ************************************ 00:09:27.426 START TEST bdev_fio_rw_verify 00:09:27.426 ************************************ 00:09:27.426 20:36:10 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:09:27.426 20:36:10 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:09:27.426 20:36:10 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:09:27.426 20:36:10 -- common/autotest_common.sh@1318 -- # sanitizers=(libasan libclang_rt.asan) 00:09:27.426 20:36:10 -- common/autotest_common.sh@1318 -- # local sanitizers 00:09:27.426 20:36:10 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:09:27.426 20:36:10 -- common/autotest_common.sh@1320 -- # shift 00:09:27.426 20:36:10 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:09:27.426 20:36:10 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:09:27.426 20:36:10 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:09:27.426 20:36:10 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:09:27.426 20:36:10 -- common/autotest_common.sh@1324 -- # grep libasan 00:09:27.426 20:36:10 -- common/autotest_common.sh@1324 -- # asan_lib=/lib64/libasan.so.6 00:09:27.426 20:36:10 -- common/autotest_common.sh@1325 -- # [[ -n /lib64/libasan.so.6 ]] 00:09:27.426 20:36:10 -- common/autotest_common.sh@1326 -- # break 00:09:27.426 20:36:10 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib64/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:09:27.426 20:36:10 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:09:27.684 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:27.684 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:27.684 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:27.684 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:27.684 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:27.684 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:27.684 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:27.684 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:27.684 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:27.684 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:27.684 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:27.684 job_TestPT: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:27.684 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:27.684 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:27.684 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:27.684 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:27.684 fio-3.35 00:09:27.684 Starting 16 threads 00:09:39.870 00:09:39.870 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=44308: Mon Apr 15 20:36:22 2024 00:09:39.870 read: IOPS=101k, BW=394MiB/s (413MB/s)(3944MiB/10007msec) 00:09:39.870 slat (nsec): min=687, max=42026k, avg=12778.73, stdev=172575.82 00:09:39.870 clat (usec): min=3, max=47818, avg=139.08, stdev=599.43 00:09:39.871 lat (usec): min=10, max=47820, avg=151.86, stdev=623.61 00:09:39.871 clat percentiles (usec): 00:09:39.871 | 50.000th=[ 87], 99.000th=[ 734], 99.900th=[12649], 99.990th=[20055], 00:09:39.871 | 99.999th=[41157] 00:09:39.871 write: IOPS=162k, BW=631MiB/s (662MB/s)(6302MiB/9980msec); 0 zone resets 00:09:39.871 slat (usec): min=2, max=160551, avg=61.71, stdev=1059.40 00:09:39.871 clat (usec): min=4, max=152463, avg=319.86, stdev=2029.15 00:09:39.871 lat (usec): min=19, max=161053, avg=381.57, stdev=2290.48 00:09:39.871 clat percentiles (usec): 00:09:39.871 | 50.000th=[ 133], 99.000th=[ 6587], 99.900th=[ 30802], 00:09:39.871 | 99.990th=[ 68682], 99.999th=[114820] 00:09:39.871 bw ( KiB/s): min=429704, max=930558, per=98.82%, avg=639027.00, stdev=8687.18, samples=304 00:09:39.871 iops : min=107421, max=232634, avg=159752.84, stdev=2171.79, samples=304 00:09:39.871 lat (usec) : 4=0.01%, 10=0.01%, 20=0.36%, 50=11.09%, 100=32.69% 00:09:39.871 lat (usec) : 250=47.99%, 500=4.44%, 750=1.97%, 1000=0.31% 00:09:39.871 lat (msec) : 2=0.17%, 4=0.14%, 10=0.34%, 20=0.36%, 50=0.10% 00:09:39.871 lat (msec) : 100=0.02%, 250=0.01% 00:09:39.871 cpu : usr=54.24%, sys=1.10%, ctx=19265, majf=0, minf=111769 00:09:39.871 IO depths : 1=12.4%, 2=24.7%, 4=50.2%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:39.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.871 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.871 issued rwts: total=1009537,1613392,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.871 latency : target=0, window=0, percentile=100.00%, depth=8 00:09:39.871 00:09:39.871 Run status group 0 (all jobs): 00:09:39.871 READ: bw=394MiB/s (413MB/s), 394MiB/s-394MiB/s (413MB/s-413MB/s), io=3944MiB (4135MB), run=10007-10007msec 00:09:39.871 WRITE: bw=631MiB/s (662MB/s), 631MiB/s-631MiB/s (662MB/s-662MB/s), io=6302MiB (6608MB), run=9980-9980msec 00:09:43.159 ----------------------------------------------------- 00:09:43.159 Suppressions used: 00:09:43.159 count bytes template 00:09:43.159 16 140 /usr/src/fio/parse.c 00:09:43.159 11724 1125504 /usr/src/fio/iolog.c 00:09:43.159 2 596 libcrypto.so 00:09:43.159 ----------------------------------------------------- 00:09:43.159 00:09:43.159 ************************************ 00:09:43.159 END TEST bdev_fio_rw_verify 00:09:43.159 ************************************ 00:09:43.159 00:09:43.159 real 0m15.107s 00:09:43.159 user 1m33.607s 00:09:43.159 sys 0m2.353s 00:09:43.159 20:36:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:43.159 
20:36:25 -- common/autotest_common.sh@10 -- # set +x 00:09:43.159 20:36:26 -- bdev/blockdev.sh@348 -- # rm -f 00:09:43.159 20:36:26 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:09:43.159 20:36:26 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:09:43.159 20:36:26 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:09:43.159 20:36:26 -- common/autotest_common.sh@1260 -- # local workload=trim 00:09:43.159 20:36:26 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:09:43.159 20:36:26 -- common/autotest_common.sh@1262 -- # local env_context= 00:09:43.159 20:36:26 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:09:43.159 20:36:26 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:09:43.159 20:36:26 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:09:43.159 20:36:26 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:09:43.159 20:36:26 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:09:43.159 20:36:26 -- common/autotest_common.sh@1280 -- # cat 00:09:43.159 20:36:26 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:09:43.159 20:36:26 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:09:43.159 20:36:26 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:09:43.159 20:36:26 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:09:43.160 20:36:26 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "59a74dca-3346-4c44-ace2-b3011241ebb6"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "59a74dca-3346-4c44-ace2-b3011241ebb6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "0e5a6515-2578-5dea-b0b1-5b807f7add5b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "0e5a6515-2578-5dea-b0b1-5b807f7add5b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "b8eff89b-eab8-5eba-86e3-f937175ef6be"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "b8eff89b-eab8-5eba-86e3-f937175ef6be",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' 
' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "b7064883-40b8-5375-a8d7-5a6c46885ecc"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b7064883-40b8-5375-a8d7-5a6c46885ecc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "26f08935-8efc-5efb-9779-b9684b9fbe99"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "26f08935-8efc-5efb-9779-b9684b9fbe99",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "094ebdb5-5980-5c61-94fe-b160668d82f1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "094ebdb5-5980-5c61-94fe-b160668d82f1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "227e057b-9753-50d2-9b8c-3f0d96eb6133"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "227e057b-9753-50d2-9b8c-3f0d96eb6133",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' 
"59341c11-4a7f-5946-9fdd-5cb4d013c559"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "59341c11-4a7f-5946-9fdd-5cb4d013c559",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "d483fc83-bef3-5c88-a5b3-45d69a406c98"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d483fc83-bef3-5c88-a5b3-45d69a406c98",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "06094ccb-6b2c-5f2e-8bc2-33ca2a56d15f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "06094ccb-6b2c-5f2e-8bc2-33ca2a56d15f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "388bf8e8-84e1-59a2-9a65-3f77b31c0f3b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "388bf8e8-84e1-59a2-9a65-3f77b31c0f3b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "e2e3e826-ba8f-5907-bbc1-febcb4e64ca4"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "e2e3e826-ba8f-5907-bbc1-febcb4e64ca4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' 
"compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "c48276d1-7ea3-4da5-8768-77cf170fc77e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "c48276d1-7ea3-4da5-8768-77cf170fc77e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "c48276d1-7ea3-4da5-8768-77cf170fc77e",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "d498323b-3752-4a6d-b3eb-8ed5b0c62e1e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "931eb6ce-d6e6-412f-afb2-45c909211eee",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "ce526ab6-1d6e-4a60-abab-e30019fd6f49"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "ce526ab6-1d6e-4a60-abab-e30019fd6f49",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "ce526ab6-1d6e-4a60-abab-e30019fd6f49",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "231964b0-be82-4f31-9ca8-b7a5f4215df2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "1e9eb1a3-53f3-4de1-96ce-d5a8acd4d083",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "a0555647-ebe4-4ccc-865b-89abd1cb142c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a0555647-ebe4-4ccc-865b-89abd1cb142c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 
0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "a0555647-ebe4-4ccc-865b-89abd1cb142c",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "44d03e7e-fd0a-45a4-b022-341ffab2b88f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "c33fd0f3-88ea-4839-800b-4c21b00a1a86",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "5adc136b-42c3-47f6-b932-c50851bebf20"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "5adc136b-42c3-47f6-b932-c50851bebf20",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:09:43.160 20:36:26 -- bdev/blockdev.sh@353 -- # [[ -n Malloc0 00:09:43.160 Malloc1p0 00:09:43.160 Malloc1p1 00:09:43.160 Malloc2p0 00:09:43.160 Malloc2p1 00:09:43.160 Malloc2p2 00:09:43.160 Malloc2p3 00:09:43.160 Malloc2p4 00:09:43.160 Malloc2p5 00:09:43.160 Malloc2p6 00:09:43.160 Malloc2p7 00:09:43.160 TestPT 00:09:43.160 raid0 00:09:43.160 concat0 ]] 00:09:43.160 20:36:26 -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:09:43.161 20:36:26 -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "59a74dca-3346-4c44-ace2-b3011241ebb6"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "59a74dca-3346-4c44-ace2-b3011241ebb6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "0e5a6515-2578-5dea-b0b1-5b807f7add5b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": 
"0e5a6515-2578-5dea-b0b1-5b807f7add5b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "b8eff89b-eab8-5eba-86e3-f937175ef6be"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "b8eff89b-eab8-5eba-86e3-f937175ef6be",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "b7064883-40b8-5375-a8d7-5a6c46885ecc"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b7064883-40b8-5375-a8d7-5a6c46885ecc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "26f08935-8efc-5efb-9779-b9684b9fbe99"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "26f08935-8efc-5efb-9779-b9684b9fbe99",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "094ebdb5-5980-5c61-94fe-b160668d82f1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "094ebdb5-5980-5c61-94fe-b160668d82f1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' 
"split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "227e057b-9753-50d2-9b8c-3f0d96eb6133"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "227e057b-9753-50d2-9b8c-3f0d96eb6133",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "59341c11-4a7f-5946-9fdd-5cb4d013c559"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "59341c11-4a7f-5946-9fdd-5cb4d013c559",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "d483fc83-bef3-5c88-a5b3-45d69a406c98"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d483fc83-bef3-5c88-a5b3-45d69a406c98",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "06094ccb-6b2c-5f2e-8bc2-33ca2a56d15f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "06094ccb-6b2c-5f2e-8bc2-33ca2a56d15f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "388bf8e8-84e1-59a2-9a65-3f77b31c0f3b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "388bf8e8-84e1-59a2-9a65-3f77b31c0f3b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "e2e3e826-ba8f-5907-bbc1-febcb4e64ca4"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "e2e3e826-ba8f-5907-bbc1-febcb4e64ca4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "c48276d1-7ea3-4da5-8768-77cf170fc77e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "c48276d1-7ea3-4da5-8768-77cf170fc77e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "c48276d1-7ea3-4da5-8768-77cf170fc77e",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "d498323b-3752-4a6d-b3eb-8ed5b0c62e1e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "931eb6ce-d6e6-412f-afb2-45c909211eee",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "ce526ab6-1d6e-4a60-abab-e30019fd6f49"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "ce526ab6-1d6e-4a60-abab-e30019fd6f49",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' 
],' ' "driver_specific": {' ' "raid": {' ' "uuid": "ce526ab6-1d6e-4a60-abab-e30019fd6f49",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "231964b0-be82-4f31-9ca8-b7a5f4215df2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "1e9eb1a3-53f3-4de1-96ce-d5a8acd4d083",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "a0555647-ebe4-4ccc-865b-89abd1cb142c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a0555647-ebe4-4ccc-865b-89abd1cb142c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "a0555647-ebe4-4ccc-865b-89abd1cb142c",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "44d03e7e-fd0a-45a4-b022-341ffab2b88f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "c33fd0f3-88ea-4839-800b-4c21b00a1a86",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "5adc136b-42c3-47f6-b932-c50851bebf20"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "5adc136b-42c3-47f6-b932-c50851bebf20",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:09:43.161 20:36:26 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:43.161 20:36:26 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc0]' 00:09:43.161 20:36:26 -- bdev/blockdev.sh@356 -- # echo filename=Malloc0 00:09:43.161 20:36:26 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:43.161 20:36:26 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p0]' 00:09:43.161 20:36:26 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p0 00:09:43.161 20:36:26 -- 
bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:43.161 20:36:26 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p1]' 00:09:43.161 20:36:26 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p1 00:09:43.161 20:36:26 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:43.161 20:36:26 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p0]' 00:09:43.161 20:36:26 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p0 00:09:43.161 20:36:26 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:43.161 20:36:26 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p1]' 00:09:43.161 20:36:26 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p1 00:09:43.161 20:36:26 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:43.161 20:36:26 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p2]' 00:09:43.161 20:36:26 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p2 00:09:43.161 20:36:26 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:43.161 20:36:26 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p3]' 00:09:43.161 20:36:26 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p3 00:09:43.161 20:36:26 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:43.161 20:36:26 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p4]' 00:09:43.161 20:36:26 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p4 00:09:43.161 20:36:26 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:43.161 20:36:26 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p5]' 00:09:43.161 20:36:26 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p5 00:09:43.161 20:36:26 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:43.161 20:36:26 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p6]' 00:09:43.161 20:36:26 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p6 00:09:43.161 20:36:26 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:43.161 20:36:26 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p7]' 00:09:43.161 20:36:26 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p7 00:09:43.161 20:36:26 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:43.161 20:36:26 -- bdev/blockdev.sh@355 -- # echo '[job_TestPT]' 00:09:43.161 20:36:26 -- bdev/blockdev.sh@356 -- # echo filename=TestPT 00:09:43.161 20:36:26 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:43.161 20:36:26 -- bdev/blockdev.sh@355 -- # echo '[job_raid0]' 00:09:43.161 20:36:26 -- bdev/blockdev.sh@356 -- # echo filename=raid0 00:09:43.161 20:36:26 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:43.161 20:36:26 -- bdev/blockdev.sh@355 -- # echo '[job_concat0]' 00:09:43.161 20:36:26 -- bdev/blockdev.sh@356 -- # echo 
filename=concat0 00:09:43.161 20:36:26 -- bdev/blockdev.sh@365 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:09:43.161 20:36:26 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:09:43.161 20:36:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:43.161 20:36:26 -- common/autotest_common.sh@10 -- # set +x 00:09:43.161 ************************************ 00:09:43.161 START TEST bdev_fio_trim 00:09:43.161 ************************************ 00:09:43.161 20:36:26 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:09:43.161 20:36:26 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:09:43.161 20:36:26 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:09:43.161 20:36:26 -- common/autotest_common.sh@1318 -- # sanitizers=(libasan libclang_rt.asan) 00:09:43.161 20:36:26 -- common/autotest_common.sh@1318 -- # local sanitizers 00:09:43.161 20:36:26 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:09:43.161 20:36:26 -- common/autotest_common.sh@1320 -- # shift 00:09:43.161 20:36:26 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:09:43.161 20:36:26 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:09:43.161 20:36:26 -- common/autotest_common.sh@1324 -- # grep libasan 00:09:43.161 20:36:26 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:09:43.161 20:36:26 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:09:43.161 20:36:26 -- common/autotest_common.sh@1324 -- # asan_lib=/lib64/libasan.so.6 00:09:43.162 20:36:26 -- common/autotest_common.sh@1325 -- # [[ -n /lib64/libasan.so.6 ]] 00:09:43.162 20:36:26 -- common/autotest_common.sh@1326 -- # break 00:09:43.162 20:36:26 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib64/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:09:43.162 20:36:26 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:09:43.162 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:43.162 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:43.162 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:43.162 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:43.162 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:43.162 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:43.162 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:43.162 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:43.162 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:43.162 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:43.162 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:43.162 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:43.162 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:43.162 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:43.162 fio-3.35 00:09:43.162 Starting 14 threads 00:09:55.413 00:09:55.413 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=44548: Mon Apr 15 20:36:38 2024 00:09:55.413 write: IOPS=323k, BW=1261MiB/s (1322MB/s)(12.3GiB/10001msec); 0 zone resets 00:09:55.413 slat (nsec): min=990, max=31026k, avg=13237.60, stdev=220571.23 00:09:55.413 clat (usec): min=7, max=51048, avg=130.06, stdev=776.37 00:09:55.413 lat (usec): min=10, max=51053, avg=143.30, stdev=806.72 00:09:55.413 clat percentiles (usec): 00:09:55.413 | 50.000th=[ 75], 99.000th=[ 717], 99.900th=[13304], 99.990th=[23200], 00:09:55.413 | 99.999th=[33817] 00:09:55.413 bw ( MiB/s): min= 905, max= 1829, per=98.85%, avg=1246.16, stdev=24.78, samples=266 00:09:55.413 iops : min=231732, max=468316, avg=319013.53, stdev=6343.23, samples=266 00:09:55.413 trim: IOPS=323k, BW=1261MiB/s (1322MB/s)(12.3GiB/10001msec); 0 zone resets 00:09:55.413 slat (nsec): min=1539, max=34034k, avg=9557.74, stdev=187904.88 00:09:55.413 clat (nsec): min=1877, max=51054k, avg=112309.37, stdev=649168.58 00:09:55.413 lat (usec): min=6, max=51057, avg=121.87, stdev=675.95 00:09:55.413 clat percentiles (usec): 00:09:55.413 | 50.000th=[ 83], 99.000th=[ 133], 99.900th=[13042], 99.990th=[22152], 00:09:55.413 | 99.999th=[29230] 00:09:55.413 bw ( MiB/s): min= 905, max= 1829, per=98.85%, avg=1246.17, stdev=24.78, samples=266 00:09:55.413 iops : min=231732, max=468332, avg=319014.47, stdev=6343.30, samples=266 00:09:55.413 lat (usec) : 2=0.01%, 4=0.01%, 10=0.28%, 20=0.64%, 50=10.18% 00:09:55.413 lat (usec) : 100=65.62%, 250=21.44%, 500=0.86%, 750=0.55%, 1000=0.14% 00:09:55.413 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.23%, 50=0.02% 00:09:55.413 lat (msec) : 100=0.01% 00:09:55.413 cpu : usr=71.64%, sys=0.00%, ctx=6264, majf=0, minf=877 00:09:55.413 IO depths : 1=12.2%, 2=24.6%, 4=50.1%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:55.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.413 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.413 issued rwts: total=0,3227751,3227755,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.413 latency : target=0, window=0, percentile=100.00%, 
depth=8 00:09:55.413 00:09:55.413 Run status group 0 (all jobs): 00:09:55.413 WRITE: bw=1261MiB/s (1322MB/s), 1261MiB/s-1261MiB/s (1322MB/s-1322MB/s), io=12.3GiB (13.2GB), run=10001-10001msec 00:09:55.413 TRIM: bw=1261MiB/s (1322MB/s), 1261MiB/s-1261MiB/s (1322MB/s-1322MB/s), io=12.3GiB (13.2GB), run=10001-10001msec 00:09:58.714 ----------------------------------------------------- 00:09:58.714 Suppressions used: 00:09:58.714 count bytes template 00:09:58.714 14 129 /usr/src/fio/parse.c 00:09:58.714 2 596 libcrypto.so 00:09:58.714 ----------------------------------------------------- 00:09:58.714 00:09:58.714 00:09:58.714 real 0m15.492s 00:09:58.714 user 1m51.082s 00:09:58.714 sys 0m0.447s 00:09:58.714 20:36:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:58.714 20:36:41 -- common/autotest_common.sh@10 -- # set +x 00:09:58.714 ************************************ 00:09:58.714 END TEST bdev_fio_trim 00:09:58.714 ************************************ 00:09:58.714 20:36:41 -- bdev/blockdev.sh@366 -- # rm -f 00:09:58.714 20:36:41 -- bdev/blockdev.sh@367 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:09:58.714 /home/vagrant/spdk_repo/spdk 00:09:58.714 20:36:41 -- bdev/blockdev.sh@368 -- # popd 00:09:58.714 20:36:41 -- bdev/blockdev.sh@369 -- # trap - SIGINT SIGTERM EXIT 00:09:58.714 00:09:58.714 real 0m31.062s 00:09:58.714 user 3m24.893s 00:09:58.714 sys 0m2.956s 00:09:58.714 20:36:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:58.714 ************************************ 00:09:58.714 END TEST bdev_fio 00:09:58.714 ************************************ 00:09:58.714 20:36:41 -- common/autotest_common.sh@10 -- # set +x 00:09:58.714 20:36:41 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:58.714 20:36:41 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:58.714 20:36:41 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:09:58.714 20:36:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:58.714 20:36:41 -- common/autotest_common.sh@10 -- # set +x 00:09:58.714 ************************************ 00:09:58.714 START TEST bdev_verify 00:09:58.714 ************************************ 00:09:58.714 20:36:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:58.714 [2024-04-15 20:36:41.933311] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
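[Editor's note] A quick cross-check of the bdev_fio_trim summary above, as illustrative shell arithmetic rather than harness output: fio's bandwidth line should be the reported IOPS times the 4 KiB block size set by --bs=4k.

    # Not part of the harness; numbers copied from the fio summary above.
    echo $((323000 * 4096 / 1024 / 1024))   # ~1261 -> matches BW=1261MiB/s
    echo $((323000 * 4096 / 1000 / 1000))   # ~1323 -> close to fio's (1322MB/s)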
00:09:58.714 [2024-04-15 20:36:41.933469] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid44757 ] 00:09:58.714 [2024-04-15 20:36:42.080372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:58.972 [2024-04-15 20:36:42.288419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.972 [2024-04-15 20:36:42.288440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.541 [2024-04-15 20:36:42.748555] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:09:59.541 [2024-04-15 20:36:42.748662] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:09:59.541 [2024-04-15 20:36:42.756494] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:09:59.541 [2024-04-15 20:36:42.756562] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:09:59.541 [2024-04-15 20:36:42.764518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:59.541 [2024-04-15 20:36:42.764556] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:09:59.541 [2024-04-15 20:36:42.764583] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:09:59.541 [2024-04-15 20:36:42.951106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:59.541 [2024-04-15 20:36:42.951223] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.541 [2024-04-15 20:36:42.951280] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002b880 00:09:59.541 [2024-04-15 20:36:42.951302] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.541 [2024-04-15 20:36:42.952940] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.541 [2024-04-15 20:36:42.952984] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:10:00.108 Running I/O for 5 seconds... 
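[Editor's note] The vbdev_passthru notices above show the TestPT bdev being stacked on Malloc3 from the JSON config. For reference, a minimal sketch of doing the same by hand against a running SPDK target, assuming the stock rpc.py client; the socket path is illustrative and the bdev names are the ones in the log.

    # Create the passthru bdev the log shows registering, then remove it.
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_passthru_create -b Malloc3 -p TestPT
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_passthru_delete TestPT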
00:10:05.381 00:10:05.381 Latency(us) 00:10:05.381 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.381 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:05.381 Verification LBA range: start 0x0 length 0x1000 00:10:05.381 Malloc0 : 5.07 4051.71 15.83 0.00 0.00 31497.87 812.62 65272.80 00:10:05.381 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:05.381 Verification LBA range: start 0x1000 length 0x1000 00:10:05.381 Malloc0 : 5.06 3956.18 15.45 0.00 0.00 32194.88 740.24 90960.81 00:10:05.381 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:05.381 Verification LBA range: start 0x0 length 0x800 00:10:05.381 Malloc1p0 : 5.07 2733.42 10.68 0.00 0.00 46682.76 1776.58 60640.54 00:10:05.381 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:05.381 Verification LBA range: start 0x800 length 0x800 00:10:05.381 Malloc1p0 : 5.06 2687.25 10.50 0.00 0.00 47375.97 1763.42 57271.62 00:10:05.381 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:05.381 Verification LBA range: start 0x0 length 0x800 00:10:05.381 Malloc1p1 : 5.08 2733.17 10.68 0.00 0.00 46654.60 1816.06 58534.97 00:10:05.381 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:05.381 Verification LBA range: start 0x800 length 0x800 00:10:05.381 Malloc1p1 : 5.07 2687.06 10.50 0.00 0.00 47346.69 1776.58 56429.39 00:10:05.381 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:05.381 Verification LBA range: start 0x0 length 0x200 00:10:05.381 Malloc2p0 : 5.08 2732.99 10.68 0.00 0.00 46625.69 1750.26 56850.51 00:10:05.381 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:05.381 Verification LBA range: start 0x200 length 0x200 00:10:05.381 Malloc2p0 : 5.08 2699.36 10.54 0.00 0.00 47228.63 1684.46 55587.16 00:10:05.381 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:05.381 Verification LBA range: start 0x0 length 0x200 00:10:05.381 Malloc2p1 : 5.08 2732.78 10.67 0.00 0.00 46598.00 1710.78 55166.05 00:10:05.381 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:05.381 Verification LBA range: start 0x200 length 0x200 00:10:05.381 Malloc2p1 : 5.08 2699.15 10.54 0.00 0.00 47202.62 1677.88 55587.16 00:10:05.381 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:05.381 Verification LBA range: start 0x0 length 0x200 00:10:05.381 Malloc2p2 : 5.08 2732.59 10.67 0.00 0.00 46563.38 2118.73 52639.36 00:10:05.381 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:05.381 Verification LBA range: start 0x200 length 0x200 00:10:05.381 Malloc2p2 : 5.08 2698.94 10.54 0.00 0.00 47174.76 2092.41 49691.55 00:10:05.381 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:05.381 Verification LBA range: start 0x0 length 0x200 00:10:05.381 Malloc2p3 : 5.08 2732.37 10.67 0.00 0.00 46527.96 1829.22 50323.23 00:10:05.381 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:05.381 Verification LBA range: start 0x200 length 0x200 00:10:05.381 Malloc2p3 : 5.08 2698.72 10.54 0.00 0.00 47136.99 1855.54 48217.65 00:10:05.381 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:05.381 Verification LBA range: start 0x0 length 0x200 00:10:05.381 Malloc2p4 : 5.08 2732.15 10.67 0.00 0.00 46503.67 1750.26 
48638.77 00:10:05.381 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:05.381 Verification LBA range: start 0x200 length 0x200 00:10:05.381 Malloc2p4 : 5.08 2698.51 10.54 0.00 0.00 47106.11 1723.94 50533.78 00:10:05.381 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:05.381 Verification LBA range: start 0x0 length 0x200 00:10:05.381 Malloc2p5 : 5.08 2731.95 10.67 0.00 0.00 46467.26 1842.38 46533.19 00:10:05.381 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:05.381 Verification LBA range: start 0x200 length 0x200 00:10:05.381 Malloc2p5 : 5.08 2698.27 10.54 0.00 0.00 47080.01 1842.38 51797.13 00:10:05.381 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:05.381 Verification LBA range: start 0x0 length 0x200 00:10:05.381 Malloc2p6 : 5.08 2731.71 10.67 0.00 0.00 46439.75 1776.58 44848.73 00:10:05.381 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:05.381 Verification LBA range: start 0x200 length 0x200 00:10:05.381 Malloc2p6 : 5.08 2698.01 10.54 0.00 0.00 47048.62 1789.74 53481.59 00:10:05.381 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:05.381 Verification LBA range: start 0x0 length 0x200 00:10:05.381 Malloc2p7 : 5.08 2731.45 10.67 0.00 0.00 46413.93 1868.70 42743.16 00:10:05.381 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:05.381 Verification LBA range: start 0x200 length 0x200 00:10:05.381 Malloc2p7 : 5.08 2697.76 10.54 0.00 0.00 47018.63 1855.54 54323.82 00:10:05.381 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:05.381 Verification LBA range: start 0x0 length 0x1000 00:10:05.381 TestPT : 5.08 2713.91 10.60 0.00 0.00 46652.24 5263.94 42953.72 00:10:05.381 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:05.381 Verification LBA range: start 0x1000 length 0x1000 00:10:05.381 TestPT : 5.08 2663.61 10.40 0.00 0.00 47568.81 4842.82 69062.84 00:10:05.381 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:05.381 Verification LBA range: start 0x0 length 0x2000 00:10:05.381 raid0 : 5.08 2730.93 10.67 0.00 0.00 46330.94 1723.94 36005.32 00:10:05.381 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:05.381 Verification LBA range: start 0x2000 length 0x2000 00:10:05.381 raid0 : 5.08 2697.17 10.54 0.00 0.00 46937.65 1651.56 54744.93 00:10:05.381 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:05.381 Verification LBA range: start 0x0 length 0x2000 00:10:05.381 concat0 : 5.08 2744.86 10.72 0.00 0.00 46105.40 1631.82 34531.42 00:10:05.381 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:05.381 Verification LBA range: start 0x2000 length 0x2000 00:10:05.381 concat0 : 5.08 2696.95 10.53 0.00 0.00 46909.47 1789.74 55587.16 00:10:05.381 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:05.381 Verification LBA range: start 0x0 length 0x1000 00:10:05.381 raid1 : 5.08 2744.72 10.72 0.00 0.00 46081.31 1085.69 34952.53 00:10:05.381 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:05.381 Verification LBA range: start 0x1000 length 0x1000 00:10:05.381 raid1 : 5.08 2696.71 10.53 0.00 0.00 46881.77 1934.50 56008.28 00:10:05.381 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:05.381 Verification LBA range: start 0x0 
length 0x4e2 00:10:05.381 AIO0 : 5.08 2738.32 10.70 0.00 0.00 46151.04 750.11 35584.21 00:10:05.381 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:05.381 Verification LBA range: start 0x4e2 length 0x4e2 00:10:05.381 AIO0 : 5.08 2684.55 10.49 0.00 0.00 47050.05 1454.16 56850.51 00:10:05.381 =================================================================================================================== 00:10:05.381 Total : 89407.21 349.25 0.00 0.00 45455.35 740.24 90960.81 00:10:07.918 ************************************ 00:10:07.918 END TEST bdev_verify 00:10:07.918 ************************************ 00:10:07.918 00:10:07.918 real 0m9.096s 00:10:07.918 user 0m16.444s 00:10:07.918 sys 0m0.596s 00:10:07.918 20:36:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:07.918 20:36:50 -- common/autotest_common.sh@10 -- # set +x 00:10:07.918 20:36:50 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:07.918 20:36:50 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:10:07.918 20:36:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:07.918 20:36:50 -- common/autotest_common.sh@10 -- # set +x 00:10:07.918 ************************************ 00:10:07.918 START TEST bdev_verify_big_io 00:10:07.918 ************************************ 00:10:07.918 20:36:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:07.918 [2024-04-15 20:36:51.091225] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
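[Editor's note] The "Queue depth is limited to ..." warnings printed during the startup below appear to follow from verify-mode bookkeeping: a verify job cannot keep more I/Os in flight than fit in the bdev at once, and with -C plus -m 0x3 each bdev's range is split between two cores. A hedged back-of-the-envelope check (illustrative arithmetic, not harness output):

    # Malloc2p* splits: 8192 blocks x 512 B = 4 MiB; / 64 KiB I/Os; / 2 cores
    echo $((8192 * 512 / 65536 / 2))    # -> 32, as in the warnings below
    # AIO0: 5000 blocks x 2048 B, same division
    echo $((5000 * 2048 / 65536 / 2))   # -> 78, as in AIO0's warning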
00:10:07.918 [2024-04-15 20:36:51.091378] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid44887 ] 00:10:07.918 [2024-04-15 20:36:51.237005] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:08.177 [2024-04-15 20:36:51.449773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.177 [2024-04-15 20:36:51.449777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.462 [2024-04-15 20:36:51.900129] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:08.462 [2024-04-15 20:36:51.900211] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:08.462 [2024-04-15 20:36:51.908101] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:08.462 [2024-04-15 20:36:51.908174] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:08.462 [2024-04-15 20:36:51.916123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:08.462 [2024-04-15 20:36:51.916156] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:10:08.462 [2024-04-15 20:36:51.916182] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:10:08.735 [2024-04-15 20:36:52.101158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:08.735 [2024-04-15 20:36:52.101250] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.735 [2024-04-15 20:36:52.101303] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002b880 00:10:08.735 [2024-04-15 20:36:52.101324] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.735 [2024-04-15 20:36:52.102806] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.735 [2024-04-15 20:36:52.102845] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:10:08.994 [2024-04-15 20:36:52.449850] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:10:08.994 [2024-04-15 20:36:52.453326] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:10:08.994 [2024-04-15 20:36:52.457086] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:10:08.994 [2024-04-15 20:36:52.460802] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:10:08.994 [2024-04-15 20:36:52.464093] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:10:08.994 [2024-04-15 20:36:52.467813] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:10:08.994 [2024-04-15 20:36:52.471417] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:10:08.994 [2024-04-15 20:36:52.475099] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:10:08.994 [2024-04-15 20:36:52.478454] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:10:08.994 [2024-04-15 20:36:52.482133] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:10:08.994 [2024-04-15 20:36:52.485409] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:10:08.994 [2024-04-15 20:36:52.489302] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:10:09.253 [2024-04-15 20:36:52.493127] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:10:09.253 [2024-04-15 20:36:52.496489] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:10:09.253 [2024-04-15 20:36:52.500367] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:10:09.253 [2024-04-15 20:36:52.503557] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:10:09.253 [2024-04-15 20:36:52.590125] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:10:09.253 [2024-04-15 20:36:52.597006] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:10:09.253 Running I/O for 5 seconds... 00:10:15.820 00:10:15.820 Latency(us) 00:10:15.820 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:15.820 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:15.820 Verification LBA range: start 0x0 length 0x100 00:10:15.820 Malloc0 : 5.30 860.46 53.78 0.00 0.00 146939.77 10317.31 448066.21 00:10:15.820 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:15.820 Verification LBA range: start 0x100 length 0x100 00:10:15.820 Malloc0 : 5.26 816.56 51.04 0.00 0.00 154157.02 9633.00 522182.43 00:10:15.820 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:15.820 Verification LBA range: start 0x0 length 0x80 00:10:15.820 Malloc1p0 : 5.36 459.01 28.69 0.00 0.00 272233.00 20634.63 539027.02 00:10:15.820 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:15.820 Verification LBA range: start 0x80 length 0x80 00:10:15.820 Malloc1p0 : 5.27 574.10 35.88 0.00 0.00 218282.02 20424.07 468279.72 00:10:15.820 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:15.820 Verification LBA range: start 0x0 length 0x80 00:10:15.820 Malloc1p1 : 5.41 244.47 15.28 0.00 0.00 506221.12 21687.42 970248.64 00:10:15.820 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:15.820 Verification LBA range: start 0x80 length 0x80 00:10:15.820 Malloc1p1 : 5.37 239.73 14.98 0.00 0.00 516119.91 19897.68 1003937.82 00:10:15.820 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:10:15.820 Verification LBA range: start 0x0 length 0x20 00:10:15.820 Malloc2p0 : 5.30 150.36 9.40 0.00 0.00 205772.43 3579.48 355420.94 00:10:15.820 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:10:15.820 Verification LBA range: start 0x20 length 0x20 00:10:15.820 Malloc2p0 : 5.30 150.34 9.40 0.00 0.00 206463.28 3500.52 309940.54 00:10:15.820 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:10:15.820 Verification LBA range: start 0x0 length 0x20 00:10:15.820 Malloc2p1 : 5.30 150.35 9.40 0.00 0.00 205376.32 3842.67 348683.10 00:10:15.820 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:10:15.820 Verification LBA range: start 0x20 length 0x20 00:10:15.820 Malloc2p1 : 5.30 150.32 9.40 0.00 0.00 206071.36 3790.03 303202.70 00:10:15.820 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:10:15.820 Verification LBA range: start 0x0 length 0x20 00:10:15.820 Malloc2p2 : 5.30 150.34 9.40 0.00 0.00 204958.24 3763.71 341945.27 00:10:15.820 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:10:15.820 Verification LBA range: start 0x20 length 0x20 00:10:15.820 Malloc2p2 : 5.30 150.30 9.39 0.00 0.00 205685.18 3553.16 294780.40 00:10:15.820 Job: Malloc2p3 (Core Mask 0x1, 
workload: verify, depth: 32, IO size: 65536) 00:10:15.820 Verification LBA range: start 0x0 length 0x20 00:10:15.820 Malloc2p3 : 5.30 150.32 9.39 0.00 0.00 204561.46 3921.63 335207.43 00:10:15.820 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:10:15.820 Verification LBA range: start 0x20 length 0x20 00:10:15.820 Malloc2p3 : 5.30 150.29 9.39 0.00 0.00 205311.16 3816.35 288042.56 00:10:15.820 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:10:15.820 Verification LBA range: start 0x0 length 0x20 00:10:15.820 Malloc2p4 : 5.30 150.30 9.39 0.00 0.00 204146.21 4158.51 326785.13 00:10:15.820 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:10:15.820 Verification LBA range: start 0x20 length 0x20 00:10:15.820 Malloc2p4 : 5.30 150.28 9.39 0.00 0.00 204842.00 3658.44 282989.19 00:10:15.820 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:10:15.820 Verification LBA range: start 0x0 length 0x20 00:10:15.820 Malloc2p5 : 5.33 153.20 9.58 0.00 0.00 200391.70 4184.83 321731.75 00:10:15.820 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:10:15.820 Verification LBA range: start 0x20 length 0x20 00:10:15.820 Malloc2p5 : 5.30 150.27 9.39 0.00 0.00 204480.59 3395.24 276251.35 00:10:15.820 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:10:15.820 Verification LBA range: start 0x0 length 0x20 00:10:15.820 Malloc2p6 : 5.33 153.19 9.57 0.00 0.00 200021.42 3842.67 313309.46 00:10:15.820 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:10:15.820 Verification LBA range: start 0x20 length 0x20 00:10:15.820 Malloc2p6 : 5.30 150.25 9.39 0.00 0.00 204041.10 3145.20 269513.51 00:10:15.820 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:10:15.820 Verification LBA range: start 0x0 length 0x20 00:10:15.820 Malloc2p7 : 5.33 153.18 9.57 0.00 0.00 199561.31 4737.54 303202.70 00:10:15.820 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:10:15.820 Verification LBA range: start 0x20 length 0x20 00:10:15.820 Malloc2p7 : 5.30 150.24 9.39 0.00 0.00 203659.89 2908.32 264460.13 00:10:15.820 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:15.820 Verification LBA range: start 0x0 length 0x100 00:10:15.820 TestPT : 5.38 252.33 15.77 0.00 0.00 480392.44 13159.84 976986.47 00:10:15.820 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:15.820 Verification LBA range: start 0x100 length 0x100 00:10:15.820 TestPT : 5.40 232.81 14.55 0.00 0.00 519305.98 26740.79 1010675.66 00:10:15.820 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:15.820 Verification LBA range: start 0x0 length 0x200 00:10:15.820 raid0 : 5.43 256.55 16.03 0.00 0.00 467813.15 20950.46 970248.64 00:10:15.820 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:15.820 Verification LBA range: start 0x200 length 0x200 00:10:15.820 raid0 : 5.40 251.31 15.71 0.00 0.00 480427.51 21582.14 1003937.82 00:10:15.820 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:15.820 Verification LBA range: start 0x0 length 0x200 00:10:15.820 concat0 : 5.41 264.12 16.51 0.00 0.00 451650.18 16318.20 970248.64 00:10:15.820 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:15.820 Verification LBA range: start 0x200 length 0x200 00:10:15.820 concat0 : 5.40 
257.67 16.10 0.00 0.00 465251.45 17160.43 1003937.82 00:10:15.820 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:15.820 Verification LBA range: start 0x0 length 0x100 00:10:15.820 raid1 : 5.41 286.62 17.91 0.00 0.00 413805.43 9053.97 970248.64 00:10:15.820 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:15.820 Verification LBA range: start 0x100 length 0x100 00:10:15.820 raid1 : 5.40 271.44 16.96 0.00 0.00 439036.28 11738.58 997199.99 00:10:15.820 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:10:15.820 Verification LBA range: start 0x0 length 0x4e 00:10:15.820 AIO0 : 5.43 299.16 18.70 0.00 0.00 239851.61 641.54 555871.61 00:10:15.820 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:10:15.820 Verification LBA range: start 0x4e length 0x4e 00:10:15.820 AIO0 : 5.40 278.56 17.41 0.00 0.00 259233.93 746.82 569347.29 00:10:15.820 =================================================================================================================== 00:10:15.820 Total : 8258.45 516.15 0.00 0.00 285239.26 641.54 1010675.66 00:10:17.742 00:10:17.742 real 0m9.941s 00:10:17.742 user 0m18.250s 00:10:17.742 sys 0m0.480s 00:10:17.742 20:37:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:17.742 20:37:00 -- common/autotest_common.sh@10 -- # set +x 00:10:17.742 ************************************ 00:10:17.742 END TEST bdev_verify_big_io 00:10:17.742 ************************************ 00:10:17.742 20:37:00 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:17.742 20:37:00 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:17.742 20:37:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:17.742 20:37:00 -- common/autotest_common.sh@10 -- # set +x 00:10:17.742 ************************************ 00:10:17.742 START TEST bdev_write_zeroes 00:10:17.742 ************************************ 00:10:17.742 20:37:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:17.742 [2024-04-15 20:37:01.100714] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
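[Editor's note] As with the trim run, the bdev_verify_big_io totals above cross-check cleanly: 8258 IOPS at the 64 KiB I/O size selected by -o 65536 reproduces the 516.15 MiB/s Total row. Illustrative arithmetic, not harness output:

    echo $((8258 * 65536 / 1024 / 1024))   # ~516 MiB/s, matching the Total row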
00:10:17.742 [2024-04-15 20:37:01.100880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid45028 ] 00:10:18.001 [2024-04-15 20:37:01.256598] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.001 [2024-04-15 20:37:01.464935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.569 [2024-04-15 20:37:01.904850] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:18.569 [2024-04-15 20:37:01.904929] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:18.569 [2024-04-15 20:37:01.912794] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:18.569 [2024-04-15 20:37:01.912850] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:18.569 [2024-04-15 20:37:01.920823] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:18.569 [2024-04-15 20:37:01.920866] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:10:18.569 [2024-04-15 20:37:01.920891] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:10:18.829 [2024-04-15 20:37:02.116458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:18.829 [2024-04-15 20:37:02.116554] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.829 [2024-04-15 20:37:02.116607] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002b880 00:10:18.829 [2024-04-15 20:37:02.116636] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.829 [2024-04-15 20:37:02.118402] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.829 [2024-04-15 20:37:02.118455] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:10:19.121 Running I/O for 1 seconds... 
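The *NOTICE* lines above are expected rather than errors: bdev examine runs before every base bdev has registered, so the early opens of Malloc1/Malloc2/Malloc3 fail transiently, and the passthru vbdev (TestPT) defers its own creation until its base bdev Malloc3 arrives ("vbdev creation deferred pending base bdev arrival"). A minimal standalone reproduction of this write_zeroes pass, assuming the same repo checkout paths this CI workspace uses (the trailing '' is just the empty extra argument run_test forwards):

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    # -q 128: queue depth, -o 4096: I/O size in bytes,
    # -w write_zeroes: workload type, -t 1: seconds per run
    "$BDEVPERF" --json "$CONF" -q 128 -o 4096 -w write_zeroes -t 1 ''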
00:10:20.072 00:10:20.072 Latency(us) 00:10:20.072 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:20.072 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:20.072 Malloc0 : 1.01 14791.42 57.78 0.00 0.00 8650.57 199.87 13475.68 00:10:20.072 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:20.072 Malloc1p0 : 1.01 14784.44 57.75 0.00 0.00 8649.19 276.36 13212.48 00:10:20.072 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:20.072 Malloc1p1 : 1.01 14779.55 57.73 0.00 0.00 8646.20 301.03 12686.09 00:10:20.072 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:20.072 Malloc2p0 : 1.01 14775.00 57.71 0.00 0.00 8642.57 297.74 12422.89 00:10:20.072 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:20.072 Malloc2p1 : 1.01 14770.67 57.70 0.00 0.00 8638.39 304.32 12054.41 00:10:20.072 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:20.072 Malloc2p2 : 1.01 14766.30 57.68 0.00 0.00 8635.80 284.58 11791.22 00:10:20.072 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:20.072 Malloc2p3 : 1.01 14761.91 57.66 0.00 0.00 8632.58 278.00 11528.02 00:10:20.072 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:20.072 Malloc2p4 : 1.01 14757.46 57.65 0.00 0.00 8628.75 278.00 11264.82 00:10:20.072 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:20.072 Malloc2p5 : 1.02 14753.14 57.63 0.00 0.00 8624.48 291.16 11001.63 00:10:20.072 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:20.072 Malloc2p6 : 1.02 14748.84 57.61 0.00 0.00 8622.72 273.07 10738.43 00:10:20.072 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:20.072 Malloc2p7 : 1.02 14744.41 57.60 0.00 0.00 8618.52 274.71 10527.87 00:10:20.072 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:20.072 TestPT : 1.02 14739.94 57.58 0.00 0.00 8616.88 292.81 10264.67 00:10:20.072 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:20.072 raid0 : 1.02 14734.34 57.56 0.00 0.00 8611.02 473.75 9790.92 00:10:20.072 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:20.072 concat0 : 1.02 14728.65 57.53 0.00 0.00 8605.63 483.62 9369.81 00:10:20.072 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:20.072 raid1 : 1.02 14813.16 57.86 0.00 0.00 8544.45 789.59 9106.61 00:10:20.072 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:20.072 AIO0 : 1.02 14736.50 57.56 0.00 0.00 8574.48 750.11 9843.56 00:10:20.072 =================================================================================================================== 00:10:20.072 Total : 236185.72 922.60 0.00 0.00 8621.34 199.87 13475.68 00:10:22.605 00:10:22.605 real 0m4.973s 00:10:22.605 user 0m4.345s 00:10:22.605 sys 0m0.416s 00:10:22.605 20:37:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:22.605 20:37:05 -- common/autotest_common.sh@10 -- # set +x 00:10:22.605 ************************************ 00:10:22.605 END TEST bdev_write_zeroes 00:10:22.605 ************************************ 00:10:22.605 20:37:05 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:22.605 20:37:05 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:22.605 20:37:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:22.605 20:37:05 -- common/autotest_common.sh@10 -- # set +x 00:10:22.605 ************************************ 00:10:22.605 START TEST bdev_json_nonenclosed 00:10:22.605 ************************************ 00:10:22.605 20:37:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:22.862 [2024-04-15 20:37:06.141349] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:10:22.862 [2024-04-15 20:37:06.141502] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid45114 ] 00:10:22.862 [2024-04-15 20:37:06.286074] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.119 [2024-04-15 20:37:06.486207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.119 [2024-04-15 20:37:06.486410] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:10:23.119 [2024-04-15 20:37:06.486444] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:23.686 ************************************ 00:10:23.686 END TEST bdev_json_nonenclosed 00:10:23.686 ************************************ 00:10:23.686 00:10:23.686 real 0m0.897s 00:10:23.686 user 0m0.582s 00:10:23.686 sys 0m0.119s 00:10:23.686 20:37:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:23.686 20:37:06 -- common/autotest_common.sh@10 -- # set +x 00:10:23.686 20:37:06 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:23.686 20:37:06 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:23.686 20:37:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:23.686 20:37:06 -- common/autotest_common.sh@10 -- # set +x 00:10:23.686 ************************************ 00:10:23.686 START TEST bdev_json_nonarray 00:10:23.686 ************************************ 00:10:23.686 20:37:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:23.686 [2024-04-15 20:37:07.098338] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:10:23.686 [2024-04-15 20:37:07.098497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid45152 ] 00:10:23.945 [2024-04-15 20:37:07.255744] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.204 [2024-04-15 20:37:07.463719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.204 [2024-04-15 20:37:07.463931] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:10:24.204 [2024-04-15 20:37:07.463964] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:24.464 00:10:24.464 real 0m0.909s 00:10:24.464 user 0m0.602s 00:10:24.464 sys 0m0.110s 00:10:24.464 20:37:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:24.464 ************************************ 00:10:24.464 END TEST bdev_json_nonarray 00:10:24.464 ************************************ 00:10:24.464 20:37:07 -- common/autotest_common.sh@10 -- # set +x 00:10:24.464 20:37:07 -- bdev/blockdev.sh@785 -- # [[ bdev == bdev ]] 00:10:24.464 20:37:07 -- bdev/blockdev.sh@786 -- # run_test bdev_qos qos_test_suite '' 00:10:24.464 20:37:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:24.464 20:37:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:24.464 20:37:07 -- common/autotest_common.sh@10 -- # set +x 00:10:24.464 ************************************ 00:10:24.464 START TEST bdev_qos 00:10:24.464 ************************************ 00:10:24.464 20:37:07 -- common/autotest_common.sh@1104 -- # qos_test_suite '' 00:10:24.464 20:37:07 -- bdev/blockdev.sh@444 -- # QOS_PID=45191 00:10:24.464 Process qos testing pid: 45191 00:10:24.464 20:37:07 -- bdev/blockdev.sh@445 -- # echo 'Process qos testing pid: 45191' 00:10:24.464 20:37:07 -- bdev/blockdev.sh@446 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:10:24.464 20:37:07 -- bdev/blockdev.sh@447 -- # waitforlisten 45191 00:10:24.464 20:37:07 -- bdev/blockdev.sh@443 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:10:24.464 20:37:07 -- common/autotest_common.sh@819 -- # '[' -z 45191 ']' 00:10:24.464 20:37:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.464 20:37:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:24.464 20:37:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.464 20:37:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:24.464 20:37:07 -- common/autotest_common.sh@10 -- # set +x 00:10:24.723 [2024-04-15 20:37:08.085507] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:10:24.723 [2024-04-15 20:37:08.085789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid45191 ] 00:10:24.981 [2024-04-15 20:37:08.264056] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.981 [2024-04-15 20:37:08.475300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:26.360 20:37:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:26.360 20:37:09 -- common/autotest_common.sh@852 -- # return 0 00:10:26.360 20:37:09 -- bdev/blockdev.sh@449 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:10:26.360 20:37:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:26.360 20:37:09 -- common/autotest_common.sh@10 -- # set +x 00:10:26.360 Malloc_0 00:10:26.360 20:37:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:26.360 20:37:09 -- bdev/blockdev.sh@450 -- # waitforbdev Malloc_0 00:10:26.360 20:37:09 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_0 00:10:26.360 20:37:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:26.360 20:37:09 -- common/autotest_common.sh@889 -- # local i 00:10:26.360 20:37:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:26.360 20:37:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:26.360 20:37:09 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:10:26.360 20:37:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:26.360 20:37:09 -- common/autotest_common.sh@10 -- # set +x 00:10:26.360 20:37:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:26.360 20:37:09 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:10:26.360 20:37:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:26.360 20:37:09 -- common/autotest_common.sh@10 -- # set +x 00:10:26.360 [ 00:10:26.360 { 00:10:26.360 "name": "Malloc_0", 00:10:26.360 "aliases": [ 00:10:26.360 "9de9f7a6-e6b6-4baf-955f-2106c7222f51" 00:10:26.360 ], 00:10:26.360 "product_name": "Malloc disk", 00:10:26.360 "block_size": 512, 00:10:26.360 "num_blocks": 262144, 00:10:26.360 "uuid": "9de9f7a6-e6b6-4baf-955f-2106c7222f51", 00:10:26.360 "assigned_rate_limits": { 00:10:26.360 "rw_ios_per_sec": 0, 00:10:26.360 "rw_mbytes_per_sec": 0, 00:10:26.360 "r_mbytes_per_sec": 0, 00:10:26.360 "w_mbytes_per_sec": 0 00:10:26.360 }, 00:10:26.360 "claimed": false, 00:10:26.360 "zoned": false, 00:10:26.360 "supported_io_types": { 00:10:26.360 "read": true, 00:10:26.360 "write": true, 00:10:26.360 "unmap": true, 00:10:26.360 "write_zeroes": true, 00:10:26.360 "flush": true, 00:10:26.360 "reset": true, 00:10:26.360 "compare": false, 00:10:26.360 "compare_and_write": false, 00:10:26.360 "abort": true, 00:10:26.360 "nvme_admin": false, 00:10:26.360 "nvme_io": false 00:10:26.360 }, 00:10:26.360 "memory_domains": [ 00:10:26.360 { 00:10:26.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.360 "dma_device_type": 2 00:10:26.360 } 00:10:26.360 ], 00:10:26.360 "driver_specific": {} 00:10:26.360 } 00:10:26.360 ] 00:10:26.360 20:37:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:26.360 20:37:09 -- common/autotest_common.sh@895 -- # return 0 00:10:26.360 20:37:09 -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_null_create Null_1 128 512 00:10:26.360 20:37:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:26.360 20:37:09 -- common/autotest_common.sh@10 -- # 
set +x 00:10:26.360 Null_1 00:10:26.360 20:37:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:26.360 20:37:09 -- bdev/blockdev.sh@452 -- # waitforbdev Null_1 00:10:26.360 20:37:09 -- common/autotest_common.sh@887 -- # local bdev_name=Null_1 00:10:26.360 20:37:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:26.360 20:37:09 -- common/autotest_common.sh@889 -- # local i 00:10:26.360 20:37:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:26.360 20:37:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:26.360 20:37:09 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:10:26.360 20:37:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:26.360 20:37:09 -- common/autotest_common.sh@10 -- # set +x 00:10:26.360 20:37:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:26.360 20:37:09 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:10:26.360 20:37:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:26.360 20:37:09 -- common/autotest_common.sh@10 -- # set +x 00:10:26.360 [ 00:10:26.360 { 00:10:26.360 "name": "Null_1", 00:10:26.360 "aliases": [ 00:10:26.360 "ff3c2f88-d337-4e64-900a-9be7a95aabd6" 00:10:26.360 ], 00:10:26.360 "product_name": "Null disk", 00:10:26.360 "block_size": 512, 00:10:26.360 "num_blocks": 262144, 00:10:26.360 "uuid": "ff3c2f88-d337-4e64-900a-9be7a95aabd6", 00:10:26.360 "assigned_rate_limits": { 00:10:26.360 "rw_ios_per_sec": 0, 00:10:26.360 "rw_mbytes_per_sec": 0, 00:10:26.360 "r_mbytes_per_sec": 0, 00:10:26.360 "w_mbytes_per_sec": 0 00:10:26.360 }, 00:10:26.360 "claimed": false, 00:10:26.360 "zoned": false, 00:10:26.360 "supported_io_types": { 00:10:26.360 "read": true, 00:10:26.360 "write": true, 00:10:26.360 "unmap": false, 00:10:26.360 "write_zeroes": true, 00:10:26.360 "flush": false, 00:10:26.360 "reset": true, 00:10:26.360 "compare": false, 00:10:26.360 "compare_and_write": false, 00:10:26.360 "abort": true, 00:10:26.360 "nvme_admin": false, 00:10:26.360 "nvme_io": false 00:10:26.360 }, 00:10:26.360 "driver_specific": {} 00:10:26.360 } 00:10:26.360 ] 00:10:26.360 20:37:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:26.360 20:37:09 -- common/autotest_common.sh@895 -- # return 0 00:10:26.360 20:37:09 -- bdev/blockdev.sh@455 -- # qos_function_test 00:10:26.360 20:37:09 -- bdev/blockdev.sh@408 -- # local qos_lower_iops_limit=1000 00:10:26.360 20:37:09 -- bdev/blockdev.sh@409 -- # local qos_lower_bw_limit=2 00:10:26.360 20:37:09 -- bdev/blockdev.sh@410 -- # local io_result=0 00:10:26.360 20:37:09 -- bdev/blockdev.sh@411 -- # local iops_limit=0 00:10:26.360 20:37:09 -- bdev/blockdev.sh@454 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:26.360 20:37:09 -- bdev/blockdev.sh@412 -- # local bw_limit=0 00:10:26.360 20:37:09 -- bdev/blockdev.sh@414 -- # get_io_result IOPS Malloc_0 00:10:26.360 20:37:09 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:10:26.360 20:37:09 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:10:26.360 20:37:09 -- bdev/blockdev.sh@375 -- # local iostat_result 00:10:26.360 20:37:09 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:10:26.360 20:37:09 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:10:26.360 20:37:09 -- bdev/blockdev.sh@376 -- # tail -1 00:10:26.360 Running I/O for 60 seconds... 
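Before applying any limit, qos_function_test first measures the unthrottled read rate on Malloc_0 by sampling iostat.py and keeping only the final line, exactly as the grep/tail/awk pipeline in the trace above does. A condensed sketch of that measurement step, assuming a running SPDK target with Malloc_0 present and the default RPC socket:

    IOSTAT=/home/vagrant/spdk_repo/spdk/scripts/iostat.py
    # Poll device stats every second for five seconds; keep the last sample.
    sample=$("$IOSTAT" -d -i 1 -t 5 | grep Malloc_0 | tail -1)
    iops=$(echo "$sample" | awk '{print $2}')   # field 2 is read IOPS, per the awk above
    echo "unthrottled IOPS on Malloc_0: $iops"

A limit well below the measured rate is then installed with bdev_set_qos_limit, as the run below shows.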
00:10:31.630 20:37:14 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 240845.79 963383.15 0.00 0.00 974848.00 0.00 0.00 ' 00:10:31.630 20:37:14 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:10:31.630 20:37:14 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:10:31.630 20:37:14 -- bdev/blockdev.sh@378 -- # iostat_result=240845.79 00:10:31.630 20:37:14 -- bdev/blockdev.sh@383 -- # echo 240845 00:10:31.630 20:37:14 -- bdev/blockdev.sh@414 -- # io_result=240845 00:10:31.630 20:37:14 -- bdev/blockdev.sh@416 -- # iops_limit=60000 00:10:31.630 20:37:14 -- bdev/blockdev.sh@417 -- # '[' 60000 -gt 1000 ']' 00:10:31.630 20:37:14 -- bdev/blockdev.sh@420 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 60000 Malloc_0 00:10:31.630 20:37:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:31.630 20:37:14 -- common/autotest_common.sh@10 -- # set +x 00:10:31.630 20:37:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:31.630 20:37:14 -- bdev/blockdev.sh@421 -- # run_test bdev_qos_iops run_qos_test 60000 IOPS Malloc_0 00:10:31.630 20:37:14 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:10:31.630 20:37:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:31.630 20:37:14 -- common/autotest_common.sh@10 -- # set +x 00:10:31.630 ************************************ 00:10:31.630 START TEST bdev_qos_iops 00:10:31.630 ************************************ 00:10:31.630 20:37:14 -- common/autotest_common.sh@1104 -- # run_qos_test 60000 IOPS Malloc_0 00:10:31.630 20:37:14 -- bdev/blockdev.sh@387 -- # local qos_limit=60000 00:10:31.630 20:37:14 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:10:31.630 20:37:14 -- bdev/blockdev.sh@390 -- # get_io_result IOPS Malloc_0 00:10:31.630 20:37:14 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:10:31.630 20:37:14 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:10:31.630 20:37:14 -- bdev/blockdev.sh@375 -- # local iostat_result 00:10:31.630 20:37:14 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:10:31.630 20:37:14 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:10:31.630 20:37:14 -- bdev/blockdev.sh@376 -- # tail -1 00:10:36.901 20:37:20 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 59987.80 239951.20 0.00 0.00 243120.00 0.00 0.00 ' 00:10:36.901 20:37:20 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:10:36.901 20:37:20 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:10:36.902 20:37:20 -- bdev/blockdev.sh@378 -- # iostat_result=59987.80 00:10:36.902 20:37:20 -- bdev/blockdev.sh@383 -- # echo 59987 00:10:36.902 ************************************ 00:10:36.902 END TEST bdev_qos_iops 00:10:36.902 ************************************ 00:10:36.902 20:37:20 -- bdev/blockdev.sh@390 -- # qos_result=59987 00:10:36.902 20:37:20 -- bdev/blockdev.sh@391 -- # '[' IOPS = BANDWIDTH ']' 00:10:36.902 20:37:20 -- bdev/blockdev.sh@394 -- # lower_limit=54000 00:10:36.902 20:37:20 -- bdev/blockdev.sh@395 -- # upper_limit=66000 00:10:36.902 20:37:20 -- bdev/blockdev.sh@398 -- # '[' 59987 -lt 54000 ']' 00:10:36.902 20:37:20 -- bdev/blockdev.sh@398 -- # '[' 59987 -gt 66000 ']' 00:10:36.902 00:10:36.902 real 0m5.187s 00:10:36.902 user 0m0.111s 00:10:36.902 sys 0m0.033s 00:10:36.902 20:37:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:36.902 20:37:20 -- common/autotest_common.sh@10 -- # set +x 00:10:36.902 20:37:20 -- bdev/blockdev.sh@425 -- # get_io_result BANDWIDTH Null_1 00:10:36.902 20:37:20 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:10:36.902 
20:37:20 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:10:36.902 20:37:20 -- bdev/blockdev.sh@375 -- # local iostat_result 00:10:36.902 20:37:20 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:10:36.902 20:37:20 -- bdev/blockdev.sh@376 -- # grep Null_1 00:10:36.902 20:37:20 -- bdev/blockdev.sh@376 -- # tail -1 00:10:42.223 20:37:25 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 64035.91 256143.63 0.00 0.00 260096.00 0.00 0.00 ' 00:10:42.223 20:37:25 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:10:42.223 20:37:25 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:10:42.223 20:37:25 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:10:42.223 20:37:25 -- bdev/blockdev.sh@380 -- # iostat_result=260096.00 00:10:42.223 20:37:25 -- bdev/blockdev.sh@383 -- # echo 260096 00:10:42.223 20:37:25 -- bdev/blockdev.sh@425 -- # bw_limit=260096 00:10:42.223 20:37:25 -- bdev/blockdev.sh@426 -- # bw_limit=25 00:10:42.223 20:37:25 -- bdev/blockdev.sh@427 -- # '[' 25 -lt 2 ']' 00:10:42.223 20:37:25 -- bdev/blockdev.sh@430 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 25 Null_1 00:10:42.223 20:37:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:42.223 20:37:25 -- common/autotest_common.sh@10 -- # set +x 00:10:42.223 20:37:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:42.223 20:37:25 -- bdev/blockdev.sh@431 -- # run_test bdev_qos_bw run_qos_test 25 BANDWIDTH Null_1 00:10:42.223 20:37:25 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:10:42.223 20:37:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:42.223 20:37:25 -- common/autotest_common.sh@10 -- # set +x 00:10:42.223 ************************************ 00:10:42.223 START TEST bdev_qos_bw 00:10:42.223 ************************************ 00:10:42.223 20:37:25 -- common/autotest_common.sh@1104 -- # run_qos_test 25 BANDWIDTH Null_1 00:10:42.223 20:37:25 -- bdev/blockdev.sh@387 -- # local qos_limit=25 00:10:42.223 20:37:25 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:10:42.223 20:37:25 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Null_1 00:10:42.223 20:37:25 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:10:42.223 20:37:25 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:10:42.223 20:37:25 -- bdev/blockdev.sh@375 -- # local iostat_result 00:10:42.223 20:37:25 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:10:42.223 20:37:25 -- bdev/blockdev.sh@376 -- # grep Null_1 00:10:42.223 20:37:25 -- bdev/blockdev.sh@376 -- # tail -1 00:10:47.523 20:37:30 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 6402.44 25609.75 0.00 0.00 25908.00 0.00 0.00 ' 00:10:47.523 20:37:30 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:10:47.523 20:37:30 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:10:47.523 20:37:30 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:10:47.523 20:37:30 -- bdev/blockdev.sh@380 -- # iostat_result=25908.00 00:10:47.523 20:37:30 -- bdev/blockdev.sh@383 -- # echo 25908 00:10:47.523 ************************************ 00:10:47.523 END TEST bdev_qos_bw 00:10:47.523 ************************************ 00:10:47.523 20:37:30 -- bdev/blockdev.sh@390 -- # qos_result=25908 00:10:47.523 20:37:30 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:10:47.523 20:37:30 -- bdev/blockdev.sh@392 -- # qos_limit=25600 00:10:47.523 20:37:30 -- bdev/blockdev.sh@394 -- # lower_limit=23040 00:10:47.523 20:37:30 -- bdev/blockdev.sh@395 -- # 
upper_limit=28160 00:10:47.523 20:37:30 -- bdev/blockdev.sh@398 -- # '[' 25908 -lt 23040 ']' 00:10:47.523 20:37:30 -- bdev/blockdev.sh@398 -- # '[' 25908 -gt 28160 ']' 00:10:47.523 00:10:47.523 real 0m5.162s 00:10:47.523 user 0m0.093s 00:10:47.523 sys 0m0.028s 00:10:47.523 20:37:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:47.523 20:37:30 -- common/autotest_common.sh@10 -- # set +x 00:10:47.523 20:37:30 -- bdev/blockdev.sh@434 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:10:47.523 20:37:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:47.523 20:37:30 -- common/autotest_common.sh@10 -- # set +x 00:10:47.523 20:37:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:47.523 20:37:30 -- bdev/blockdev.sh@435 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:10:47.523 20:37:30 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:10:47.523 20:37:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:47.523 20:37:30 -- common/autotest_common.sh@10 -- # set +x 00:10:47.523 ************************************ 00:10:47.523 START TEST bdev_qos_ro_bw 00:10:47.523 ************************************ 00:10:47.523 20:37:30 -- common/autotest_common.sh@1104 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:10:47.523 20:37:30 -- bdev/blockdev.sh@387 -- # local qos_limit=2 00:10:47.523 20:37:30 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:10:47.523 20:37:30 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Malloc_0 00:10:47.523 20:37:30 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:10:47.523 20:37:30 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:10:47.523 20:37:30 -- bdev/blockdev.sh@375 -- # local iostat_result 00:10:47.523 20:37:30 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:10:47.523 20:37:30 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:10:47.523 20:37:30 -- bdev/blockdev.sh@376 -- # tail -1 00:10:52.814 20:37:35 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 512.11 2048.45 0.00 0.00 2068.00 0.00 0.00 ' 00:10:52.814 20:37:35 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:10:52.814 20:37:35 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:10:52.814 20:37:35 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:10:52.814 20:37:35 -- bdev/blockdev.sh@380 -- # iostat_result=2068.00 00:10:52.814 20:37:35 -- bdev/blockdev.sh@383 -- # echo 2068 00:10:52.814 20:37:35 -- bdev/blockdev.sh@390 -- # qos_result=2068 00:10:52.814 20:37:35 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:10:52.814 20:37:35 -- bdev/blockdev.sh@392 -- # qos_limit=2048 00:10:52.814 20:37:35 -- bdev/blockdev.sh@394 -- # lower_limit=1843 00:10:52.814 20:37:35 -- bdev/blockdev.sh@395 -- # upper_limit=2252 00:10:52.814 20:37:35 -- bdev/blockdev.sh@398 -- # '[' 2068 -lt 1843 ']' 00:10:52.814 20:37:35 -- bdev/blockdev.sh@398 -- # '[' 2068 -gt 2252 ']' 00:10:52.814 00:10:52.814 real 0m5.155s 00:10:52.814 user 0m0.091s 00:10:52.814 sys 0m0.034s 00:10:52.814 20:37:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:52.814 20:37:35 -- common/autotest_common.sh@10 -- # set +x 00:10:52.814 ************************************ 00:10:52.814 END TEST bdev_qos_ro_bw 00:10:52.814 ************************************ 00:10:52.814 20:37:35 -- bdev/blockdev.sh@457 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:10:52.814 20:37:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:52.814 20:37:35 -- common/autotest_common.sh@10 -- # set +x 
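Each run_qos_test invocation above accepts the measured rate only if it falls within ±10% of the configured limit, which is where the 54000/66000, 23040/28160 and 1843/2252 bounds come from. A bash paraphrase of that check (variable names mirror the trace; the arithmetic is integer, so the bounds round down, e.g. 2048 * 9 / 10 = 1843):

    qos_limit=60000                        # IOPS here; KiB/s in the bandwidth runs
    qos_result=59987                       # parsed from iostat.py output
    lower_limit=$((qos_limit * 9 / 10))    # 54000
    upper_limit=$((qos_limit * 11 / 10))   # 66000
    if [ "$qos_result" -lt "$lower_limit" ] || [ "$qos_result" -gt "$upper_limit" ]; then
        echo "QoS result $qos_result outside [$lower_limit, $upper_limit]" >&2
        exit 1
    fi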
00:10:53.073 20:37:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:53.073 20:37:36 -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_null_delete Null_1 00:10:53.073 20:37:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:53.073 20:37:36 -- common/autotest_common.sh@10 -- # set +x 00:10:53.073 00:10:53.073 Latency(us) 00:10:53.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:53.073 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:10:53.073 Malloc_0 : 26.60 82125.74 320.80 0.00 0.00 3087.54 888.29 505337.83 00:10:53.073 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:10:53.073 Null_1 : 26.78 73819.21 288.36 0.00 0.00 3465.37 192.46 172657.09 00:10:53.073 =================================================================================================================== 00:10:53.073 Total : 155944.95 609.16 0.00 0.00 3267.01 192.46 505337.83 00:10:53.073 0 00:10:53.073 20:37:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:53.073 20:37:36 -- bdev/blockdev.sh@459 -- # killprocess 45191 00:10:53.073 20:37:36 -- common/autotest_common.sh@926 -- # '[' -z 45191 ']' 00:10:53.073 20:37:36 -- common/autotest_common.sh@930 -- # kill -0 45191 00:10:53.073 20:37:36 -- common/autotest_common.sh@931 -- # uname 00:10:53.073 20:37:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:53.073 20:37:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 45191 00:10:53.073 20:37:36 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:10:53.073 killing process with pid 45191 00:10:53.073 Received shutdown signal, test time was about 26.828518 seconds 00:10:53.073 00:10:53.073 Latency(us) 00:10:53.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:53.073 =================================================================================================================== 00:10:53.073 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:53.073 20:37:36 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:10:53.073 20:37:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 45191' 00:10:53.073 20:37:36 -- common/autotest_common.sh@945 -- # kill 45191 00:10:53.073 20:37:36 -- common/autotest_common.sh@950 -- # wait 45191 00:10:54.979 ************************************ 00:10:54.979 END TEST bdev_qos 00:10:54.979 ************************************ 00:10:54.979 20:37:38 -- bdev/blockdev.sh@460 -- # trap - SIGINT SIGTERM EXIT 00:10:54.979 00:10:54.979 real 0m30.259s 00:10:54.979 user 0m30.794s 00:10:54.979 sys 0m0.761s 00:10:54.979 20:37:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:54.979 20:37:38 -- common/autotest_common.sh@10 -- # set +x 00:10:54.979 20:37:38 -- bdev/blockdev.sh@787 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:10:54.979 20:37:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:54.979 20:37:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:54.979 20:37:38 -- common/autotest_common.sh@10 -- # set +x 00:10:54.979 ************************************ 00:10:54.979 START TEST bdev_qd_sampling 00:10:54.979 ************************************ 00:10:54.979 20:37:38 -- common/autotest_common.sh@1104 -- # qd_sampling_test_suite '' 00:10:54.979 20:37:38 -- bdev/blockdev.sh@536 -- # QD_DEV=Malloc_QD 00:10:54.979 Process bdev QD sampling period testing pid: 45688 00:10:54.979 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:10:54.979 20:37:38 -- bdev/blockdev.sh@539 -- # QD_PID=45688 00:10:54.979 20:37:38 -- bdev/blockdev.sh@540 -- # echo 'Process bdev QD sampling period testing pid: 45688' 00:10:54.979 20:37:38 -- bdev/blockdev.sh@541 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:10:54.979 20:37:38 -- bdev/blockdev.sh@542 -- # waitforlisten 45688 00:10:54.979 20:37:38 -- common/autotest_common.sh@819 -- # '[' -z 45688 ']' 00:10:54.979 20:37:38 -- bdev/blockdev.sh@538 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:10:54.979 20:37:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.979 20:37:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:54.979 20:37:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.979 20:37:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:54.979 20:37:38 -- common/autotest_common.sh@10 -- # set +x 00:10:54.979 [2024-04-15 20:37:38.392070] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:10:54.979 [2024-04-15 20:37:38.392218] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid45688 ] 00:10:55.238 [2024-04-15 20:37:38.540032] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:55.497 [2024-04-15 20:37:38.751859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.497 [2024-04-15 20:37:38.751871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.434 20:37:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:56.434 20:37:39 -- common/autotest_common.sh@852 -- # return 0 00:10:56.434 20:37:39 -- bdev/blockdev.sh@544 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:10:56.434 20:37:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:56.434 20:37:39 -- common/autotest_common.sh@10 -- # set +x 00:10:56.434 Malloc_QD 00:10:56.434 20:37:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:56.434 20:37:39 -- bdev/blockdev.sh@545 -- # waitforbdev Malloc_QD 00:10:56.434 20:37:39 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_QD 00:10:56.434 20:37:39 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:56.434 20:37:39 -- common/autotest_common.sh@889 -- # local i 00:10:56.434 20:37:39 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:56.434 20:37:39 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:56.434 20:37:39 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:10:56.434 20:37:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:56.434 20:37:39 -- common/autotest_common.sh@10 -- # set +x 00:10:56.434 20:37:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:56.434 20:37:39 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:10:56.434 20:37:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:56.434 20:37:39 -- common/autotest_common.sh@10 -- # set +x 00:10:56.693 [ 00:10:56.693 { 00:10:56.693 "name": "Malloc_QD", 00:10:56.693 "aliases": [ 00:10:56.693 "8ba39c59-2fa3-4d6d-b4e9-5594898dc62e" 00:10:56.693 ], 00:10:56.693 "product_name": "Malloc disk", 00:10:56.693 "block_size": 512, 00:10:56.693 "num_blocks": 
262144, 00:10:56.693 "uuid": "8ba39c59-2fa3-4d6d-b4e9-5594898dc62e", 00:10:56.693 "assigned_rate_limits": { 00:10:56.693 "rw_ios_per_sec": 0, 00:10:56.693 "rw_mbytes_per_sec": 0, 00:10:56.693 "r_mbytes_per_sec": 0, 00:10:56.693 "w_mbytes_per_sec": 0 00:10:56.693 }, 00:10:56.693 "claimed": false, 00:10:56.693 "zoned": false, 00:10:56.693 "supported_io_types": { 00:10:56.693 "read": true, 00:10:56.693 "write": true, 00:10:56.693 "unmap": true, 00:10:56.693 "write_zeroes": true, 00:10:56.693 "flush": true, 00:10:56.693 "reset": true, 00:10:56.693 "compare": false, 00:10:56.693 "compare_and_write": false, 00:10:56.693 "abort": true, 00:10:56.693 "nvme_admin": false, 00:10:56.693 "nvme_io": false 00:10:56.693 }, 00:10:56.693 "memory_domains": [ 00:10:56.693 { 00:10:56.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.693 "dma_device_type": 2 00:10:56.693 } 00:10:56.693 ], 00:10:56.693 "driver_specific": {} 00:10:56.693 } 00:10:56.693 ] 00:10:56.693 20:37:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:56.693 20:37:39 -- common/autotest_common.sh@895 -- # return 0 00:10:56.693 20:37:39 -- bdev/blockdev.sh@547 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:56.693 20:37:39 -- bdev/blockdev.sh@548 -- # sleep 2 00:10:56.693 Running I/O for 5 seconds... 00:10:58.652 20:37:41 -- bdev/blockdev.sh@549 -- # qd_sampling_function_test Malloc_QD 00:10:58.652 20:37:41 -- bdev/blockdev.sh@517 -- # local bdev_name=Malloc_QD 00:10:58.652 20:37:41 -- bdev/blockdev.sh@518 -- # local sampling_period=10 00:10:58.652 20:37:41 -- bdev/blockdev.sh@519 -- # local iostats 00:10:58.652 20:37:41 -- bdev/blockdev.sh@521 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:10:58.652 20:37:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:58.652 20:37:41 -- common/autotest_common.sh@10 -- # set +x 00:10:58.652 20:37:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:58.652 20:37:41 -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:10:58.652 20:37:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:58.652 20:37:41 -- common/autotest_common.sh@10 -- # set +x 00:10:58.652 20:37:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:58.652 20:37:41 -- bdev/blockdev.sh@523 -- # iostats='{ 00:10:58.652 "tick_rate": 2490000000, 00:10:58.652 "ticks": 1524297230846, 00:10:58.652 "bdevs": [ 00:10:58.652 { 00:10:58.652 "name": "Malloc_QD", 00:10:58.652 "bytes_read": 2315293184, 00:10:58.652 "num_read_ops": 565251, 00:10:58.652 "bytes_written": 0, 00:10:58.652 "num_write_ops": 0, 00:10:58.652 "bytes_unmapped": 0, 00:10:58.652 "num_unmap_ops": 0, 00:10:58.652 "bytes_copied": 0, 00:10:58.652 "num_copy_ops": 0, 00:10:58.652 "read_latency_ticks": 2458272495874, 00:10:58.652 "max_read_latency_ticks": 14260748, 00:10:58.652 "min_read_latency_ticks": 274032, 00:10:58.652 "write_latency_ticks": 0, 00:10:58.652 "max_write_latency_ticks": 0, 00:10:58.652 "min_write_latency_ticks": 0, 00:10:58.652 "unmap_latency_ticks": 0, 00:10:58.652 "max_unmap_latency_ticks": 0, 00:10:58.652 "min_unmap_latency_ticks": 0, 00:10:58.652 "copy_latency_ticks": 0, 00:10:58.652 "max_copy_latency_ticks": 0, 00:10:58.652 "min_copy_latency_ticks": 0, 00:10:58.652 "io_error": {}, 00:10:58.652 "queue_depth_polling_period": 10, 00:10:58.652 "queue_depth": 512, 00:10:58.652 "io_time": 90, 00:10:58.652 "weighted_io_time": 46080 00:10:58.652 } 00:10:58.652 ] 00:10:58.652 }' 00:10:58.652 20:37:41 -- bdev/blockdev.sh@525 -- # jq -r 
'.bdevs[0].queue_depth_polling_period' 00:10:58.652 20:37:42 -- bdev/blockdev.sh@525 -- # qd_sampling_period=10 00:10:58.652 20:37:42 -- bdev/blockdev.sh@527 -- # '[' 10 == null ']' 00:10:58.652 20:37:42 -- bdev/blockdev.sh@527 -- # '[' 10 -ne 10 ']' 00:10:58.652 20:37:42 -- bdev/blockdev.sh@551 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:10:58.652 20:37:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:58.652 20:37:42 -- common/autotest_common.sh@10 -- # set +x 00:10:58.652 00:10:58.652 Latency(us) 00:10:58.652 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:58.652 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:10:58.652 Malloc_QD : 2.01 146463.89 572.12 0.00 0.00 1745.15 424.40 5737.69 00:10:58.652 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:10:58.652 Malloc_QD : 2.01 146420.62 571.96 0.00 0.00 1746.02 279.65 5632.41 00:10:58.652 =================================================================================================================== 00:10:58.652 Total : 292884.51 1144.08 0.00 0.00 1745.59 279.65 5737.69 00:10:58.910 0 00:10:58.910 20:37:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:58.910 20:37:42 -- bdev/blockdev.sh@552 -- # killprocess 45688 00:10:58.910 20:37:42 -- common/autotest_common.sh@926 -- # '[' -z 45688 ']' 00:10:58.910 20:37:42 -- common/autotest_common.sh@930 -- # kill -0 45688 00:10:58.910 20:37:42 -- common/autotest_common.sh@931 -- # uname 00:10:58.910 20:37:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:58.910 20:37:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 45688 00:10:58.910 killing process with pid 45688 00:10:58.910 Received shutdown signal, test time was about 2.153172 seconds 00:10:58.910 00:10:58.910 Latency(us) 00:10:58.910 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:58.910 =================================================================================================================== 00:10:58.910 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:58.910 20:37:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:58.910 20:37:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:58.910 20:37:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 45688' 00:10:58.910 20:37:42 -- common/autotest_common.sh@945 -- # kill 45688 00:10:58.910 20:37:42 -- common/autotest_common.sh@950 -- # wait 45688 00:11:00.287 ************************************ 00:11:00.287 END TEST bdev_qd_sampling 00:11:00.287 ************************************ 00:11:00.287 20:37:43 -- bdev/blockdev.sh@553 -- # trap - SIGINT SIGTERM EXIT 00:11:00.287 00:11:00.287 real 0m5.518s 00:11:00.287 user 0m10.255s 00:11:00.287 sys 0m0.372s 00:11:00.287 20:37:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:00.287 20:37:43 -- common/autotest_common.sh@10 -- # set +x 00:11:00.546 20:37:43 -- bdev/blockdev.sh@788 -- # run_test bdev_error error_test_suite '' 00:11:00.546 20:37:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:00.546 20:37:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:00.546 20:37:43 -- common/autotest_common.sh@10 -- # set +x 00:11:00.546 ************************************ 00:11:00.546 START TEST bdev_error 00:11:00.546 ************************************ 00:11:00.546 Process error testing pid: 45793 00:11:00.546 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:11:00.546 20:37:43 -- common/autotest_common.sh@1104 -- # error_test_suite '' 00:11:00.546 20:37:43 -- bdev/blockdev.sh@464 -- # DEV_1=Dev_1 00:11:00.546 20:37:43 -- bdev/blockdev.sh@465 -- # DEV_2=Dev_2 00:11:00.546 20:37:43 -- bdev/blockdev.sh@466 -- # ERR_DEV=EE_Dev_1 00:11:00.546 20:37:43 -- bdev/blockdev.sh@470 -- # ERR_PID=45793 00:11:00.546 20:37:43 -- bdev/blockdev.sh@471 -- # echo 'Process error testing pid: 45793' 00:11:00.546 20:37:43 -- bdev/blockdev.sh@472 -- # waitforlisten 45793 00:11:00.546 20:37:43 -- common/autotest_common.sh@819 -- # '[' -z 45793 ']' 00:11:00.546 20:37:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.546 20:37:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:00.546 20:37:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.546 20:37:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:00.546 20:37:43 -- common/autotest_common.sh@10 -- # set +x 00:11:00.546 20:37:43 -- bdev/blockdev.sh@469 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:11:00.546 [2024-04-15 20:37:43.961946] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:11:00.546 [2024-04-15 20:37:43.962099] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid45793 ] 00:11:00.805 [2024-04-15 20:37:44.116199] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.065 [2024-04-15 20:37:44.320291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:02.030 20:37:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:02.030 20:37:45 -- common/autotest_common.sh@852 -- # return 0 00:11:02.030 20:37:45 -- bdev/blockdev.sh@474 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:11:02.030 20:37:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:02.030 20:37:45 -- common/autotest_common.sh@10 -- # set +x 00:11:02.030 Dev_1 00:11:02.030 20:37:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:02.030 20:37:45 -- bdev/blockdev.sh@475 -- # waitforbdev Dev_1 00:11:02.030 20:37:45 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:11:02.030 20:37:45 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:02.030 20:37:45 -- common/autotest_common.sh@889 -- # local i 00:11:02.030 20:37:45 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:02.030 20:37:45 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:02.030 20:37:45 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:11:02.030 20:37:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:02.030 20:37:45 -- common/autotest_common.sh@10 -- # set +x 00:11:02.030 20:37:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:02.030 20:37:45 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:11:02.030 20:37:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:02.030 20:37:45 -- common/autotest_common.sh@10 -- # set +x 00:11:02.030 [ 00:11:02.030 { 00:11:02.030 "name": "Dev_1", 00:11:02.030 "aliases": [ 00:11:02.030 "25f084d5-c219-4c73-8aa2-8e26c6da9cac" 00:11:02.030 ], 00:11:02.030 "product_name": "Malloc disk", 00:11:02.030 "block_size": 512, 00:11:02.030 
"num_blocks": 262144, 00:11:02.030 "uuid": "25f084d5-c219-4c73-8aa2-8e26c6da9cac", 00:11:02.030 "assigned_rate_limits": { 00:11:02.030 "rw_ios_per_sec": 0, 00:11:02.030 "rw_mbytes_per_sec": 0, 00:11:02.030 "r_mbytes_per_sec": 0, 00:11:02.030 "w_mbytes_per_sec": 0 00:11:02.030 }, 00:11:02.030 "claimed": false, 00:11:02.030 "zoned": false, 00:11:02.030 "supported_io_types": { 00:11:02.030 "read": true, 00:11:02.030 "write": true, 00:11:02.030 "unmap": true, 00:11:02.030 "write_zeroes": true, 00:11:02.030 "flush": true, 00:11:02.030 "reset": true, 00:11:02.030 "compare": false, 00:11:02.030 "compare_and_write": false, 00:11:02.030 "abort": true, 00:11:02.030 "nvme_admin": false, 00:11:02.030 "nvme_io": false 00:11:02.030 }, 00:11:02.030 "memory_domains": [ 00:11:02.030 { 00:11:02.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.030 "dma_device_type": 2 00:11:02.030 } 00:11:02.030 ], 00:11:02.030 "driver_specific": {} 00:11:02.030 } 00:11:02.030 ] 00:11:02.030 20:37:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:02.030 20:37:45 -- common/autotest_common.sh@895 -- # return 0 00:11:02.030 20:37:45 -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_error_create Dev_1 00:11:02.030 20:37:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:02.030 20:37:45 -- common/autotest_common.sh@10 -- # set +x 00:11:02.030 true 00:11:02.030 20:37:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:02.030 20:37:45 -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:11:02.030 20:37:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:02.030 20:37:45 -- common/autotest_common.sh@10 -- # set +x 00:11:02.289 Dev_2 00:11:02.289 20:37:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:02.289 20:37:45 -- bdev/blockdev.sh@478 -- # waitforbdev Dev_2 00:11:02.289 20:37:45 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:11:02.289 20:37:45 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:02.289 20:37:45 -- common/autotest_common.sh@889 -- # local i 00:11:02.289 20:37:45 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:02.289 20:37:45 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:02.289 20:37:45 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:11:02.289 20:37:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:02.289 20:37:45 -- common/autotest_common.sh@10 -- # set +x 00:11:02.289 20:37:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:02.289 20:37:45 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:11:02.289 20:37:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:02.289 20:37:45 -- common/autotest_common.sh@10 -- # set +x 00:11:02.289 [ 00:11:02.289 { 00:11:02.289 "name": "Dev_2", 00:11:02.289 "aliases": [ 00:11:02.289 "7468d4f9-00c0-447e-9b4f-7e1ff2bc9c73" 00:11:02.289 ], 00:11:02.289 "product_name": "Malloc disk", 00:11:02.289 "block_size": 512, 00:11:02.289 "num_blocks": 262144, 00:11:02.289 "uuid": "7468d4f9-00c0-447e-9b4f-7e1ff2bc9c73", 00:11:02.289 "assigned_rate_limits": { 00:11:02.289 "rw_ios_per_sec": 0, 00:11:02.289 "rw_mbytes_per_sec": 0, 00:11:02.289 "r_mbytes_per_sec": 0, 00:11:02.289 "w_mbytes_per_sec": 0 00:11:02.289 }, 00:11:02.289 "claimed": false, 00:11:02.289 "zoned": false, 00:11:02.289 "supported_io_types": { 00:11:02.289 "read": true, 00:11:02.289 "write": true, 00:11:02.289 "unmap": true, 00:11:02.289 "write_zeroes": true, 00:11:02.289 "flush": true, 00:11:02.289 "reset": true, 00:11:02.289 
"compare": false, 00:11:02.289 "compare_and_write": false, 00:11:02.289 "abort": true, 00:11:02.289 "nvme_admin": false, 00:11:02.289 "nvme_io": false 00:11:02.289 }, 00:11:02.289 "memory_domains": [ 00:11:02.289 { 00:11:02.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.289 "dma_device_type": 2 00:11:02.289 } 00:11:02.289 ], 00:11:02.289 "driver_specific": {} 00:11:02.289 } 00:11:02.289 ] 00:11:02.289 20:37:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:02.289 20:37:45 -- common/autotest_common.sh@895 -- # return 0 00:11:02.289 20:37:45 -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:11:02.289 20:37:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:02.289 20:37:45 -- common/autotest_common.sh@10 -- # set +x 00:11:02.289 20:37:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:02.289 20:37:45 -- bdev/blockdev.sh@482 -- # sleep 1 00:11:02.289 20:37:45 -- bdev/blockdev.sh@481 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:11:02.548 Running I/O for 5 seconds... 00:11:03.527 Process is existed as continue on error is set. Pid: 45793 00:11:03.527 20:37:46 -- bdev/blockdev.sh@485 -- # kill -0 45793 00:11:03.527 20:37:46 -- bdev/blockdev.sh@486 -- # echo 'Process is existed as continue on error is set. Pid: 45793' 00:11:03.527 20:37:46 -- bdev/blockdev.sh@493 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:11:03.527 20:37:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:03.527 20:37:46 -- common/autotest_common.sh@10 -- # set +x 00:11:03.527 20:37:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:03.527 20:37:46 -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_malloc_delete Dev_1 00:11:03.527 20:37:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:03.527 20:37:46 -- common/autotest_common.sh@10 -- # set +x 00:11:03.527 Timeout while waiting for response: 00:11:03.527 00:11:03.527 00:11:03.786 20:37:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:03.786 20:37:47 -- bdev/blockdev.sh@495 -- # sleep 5 00:11:07.983 00:11:07.983 Latency(us) 00:11:07.983 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:07.983 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:11:07.984 EE_Dev_1 : 0.90 137708.54 537.92 5.54 0.00 115.46 51.20 1763.42 00:11:07.984 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:11:07.984 Dev_2 : 5.00 273920.75 1070.00 0.00 0.00 57.69 19.43 362158.78 00:11:07.984 =================================================================================================================== 00:11:07.984 Total : 411629.29 1607.93 5.54 0.00 62.49 19.43 362158.78 00:11:08.952 20:37:52 -- bdev/blockdev.sh@497 -- # killprocess 45793 00:11:08.952 20:37:52 -- common/autotest_common.sh@926 -- # '[' -z 45793 ']' 00:11:08.952 20:37:52 -- common/autotest_common.sh@930 -- # kill -0 45793 00:11:08.952 20:37:52 -- common/autotest_common.sh@931 -- # uname 00:11:08.952 20:37:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:08.952 20:37:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 45793 00:11:08.952 20:37:52 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:11:08.952 20:37:52 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:11:08.952 killing process with pid 45793 00:11:08.952 Received shutdown signal, test time was about 5.000000 seconds 00:11:08.952 00:11:08.952 Latency(us) 00:11:08.952 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:08.952 =================================================================================================================== 00:11:08.952 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:08.952 20:37:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 45793' 00:11:08.952 20:37:52 -- common/autotest_common.sh@945 -- # kill 45793 00:11:08.952 20:37:52 -- common/autotest_common.sh@950 -- # wait 45793 00:11:10.337 Process error testing pid: 45931 00:11:10.337 20:37:53 -- bdev/blockdev.sh@501 -- # ERR_PID=45931 00:11:10.337 20:37:53 -- bdev/blockdev.sh@502 -- # echo 'Process error testing pid: 45931' 00:11:10.337 20:37:53 -- bdev/blockdev.sh@503 -- # waitforlisten 45931 00:11:10.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.337 20:37:53 -- common/autotest_common.sh@819 -- # '[' -z 45931 ']' 00:11:10.337 20:37:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.337 20:37:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:10.337 20:37:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.337 20:37:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:10.337 20:37:53 -- common/autotest_common.sh@10 -- # set +x 00:11:10.337 20:37:53 -- bdev/blockdev.sh@500 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:11:10.595 [2024-04-15 20:37:53.953682] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:11:10.595 [2024-04-15 20:37:53.953834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid45931 ] 00:11:10.854 [2024-04-15 20:37:54.107222] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.854 [2024-04-15 20:37:54.316857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:12.232 20:37:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:12.232 20:37:55 -- common/autotest_common.sh@852 -- # return 0 00:11:12.232 20:37:55 -- bdev/blockdev.sh@505 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:11:12.232 20:37:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:12.232 20:37:55 -- common/autotest_common.sh@10 -- # set +x 00:11:12.232 Dev_1 00:11:12.232 20:37:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:12.232 20:37:55 -- bdev/blockdev.sh@506 -- # waitforbdev Dev_1 00:11:12.232 20:37:55 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:11:12.232 20:37:55 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:12.232 20:37:55 -- common/autotest_common.sh@889 -- # local i 00:11:12.232 20:37:55 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:12.232 20:37:55 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:12.232 20:37:55 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:11:12.232 20:37:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:12.232 20:37:55 -- common/autotest_common.sh@10 -- # set +x 00:11:12.232 20:37:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:12.232 20:37:55 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:11:12.232 20:37:55 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:11:12.232 20:37:55 -- common/autotest_common.sh@10 -- # set +x 00:11:12.232 [ 00:11:12.232 { 00:11:12.232 "name": "Dev_1", 00:11:12.232 "aliases": [ 00:11:12.232 "8c866d96-67e5-45eb-9292-d8ab590ffb15" 00:11:12.232 ], 00:11:12.232 "product_name": "Malloc disk", 00:11:12.232 "block_size": 512, 00:11:12.232 "num_blocks": 262144, 00:11:12.232 "uuid": "8c866d96-67e5-45eb-9292-d8ab590ffb15", 00:11:12.232 "assigned_rate_limits": { 00:11:12.232 "rw_ios_per_sec": 0, 00:11:12.232 "rw_mbytes_per_sec": 0, 00:11:12.232 "r_mbytes_per_sec": 0, 00:11:12.232 "w_mbytes_per_sec": 0 00:11:12.232 }, 00:11:12.232 "claimed": false, 00:11:12.232 "zoned": false, 00:11:12.232 "supported_io_types": { 00:11:12.232 "read": true, 00:11:12.232 "write": true, 00:11:12.232 "unmap": true, 00:11:12.232 "write_zeroes": true, 00:11:12.232 "flush": true, 00:11:12.232 "reset": true, 00:11:12.233 "compare": false, 00:11:12.233 "compare_and_write": false, 00:11:12.233 "abort": true, 00:11:12.233 "nvme_admin": false, 00:11:12.233 "nvme_io": false 00:11:12.233 }, 00:11:12.233 "memory_domains": [ 00:11:12.233 { 00:11:12.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.233 "dma_device_type": 2 00:11:12.233 } 00:11:12.233 ], 00:11:12.233 "driver_specific": {} 00:11:12.233 } 00:11:12.233 ] 00:11:12.233 20:37:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:12.233 20:37:55 -- common/autotest_common.sh@895 -- # return 0 00:11:12.233 20:37:55 -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_error_create Dev_1 00:11:12.233 20:37:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:12.233 20:37:55 -- common/autotest_common.sh@10 -- # set +x 00:11:12.233 true 00:11:12.233 20:37:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:12.233 20:37:55 -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:11:12.233 20:37:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:12.233 20:37:55 -- common/autotest_common.sh@10 -- # set +x 00:11:12.233 Dev_2 00:11:12.233 20:37:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:12.233 20:37:55 -- bdev/blockdev.sh@509 -- # waitforbdev Dev_2 00:11:12.233 20:37:55 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:11:12.233 20:37:55 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:12.233 20:37:55 -- common/autotest_common.sh@889 -- # local i 00:11:12.233 20:37:55 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:12.233 20:37:55 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:12.233 20:37:55 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:11:12.233 20:37:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:12.233 20:37:55 -- common/autotest_common.sh@10 -- # set +x 00:11:12.233 20:37:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:12.233 20:37:55 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:11:12.233 20:37:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:12.233 20:37:55 -- common/autotest_common.sh@10 -- # set +x 00:11:12.233 [ 00:11:12.233 { 00:11:12.233 "name": "Dev_2", 00:11:12.233 "aliases": [ 00:11:12.233 "3936311e-627e-46ba-99f4-66bf6fdaec0f" 00:11:12.233 ], 00:11:12.233 "product_name": "Malloc disk", 00:11:12.233 "block_size": 512, 00:11:12.233 "num_blocks": 262144, 00:11:12.233 "uuid": "3936311e-627e-46ba-99f4-66bf6fdaec0f", 00:11:12.233 "assigned_rate_limits": { 00:11:12.233 "rw_ios_per_sec": 0, 00:11:12.233 "rw_mbytes_per_sec": 0, 00:11:12.233 "r_mbytes_per_sec": 0, 00:11:12.233 
"w_mbytes_per_sec": 0 00:11:12.233 }, 00:11:12.233 "claimed": false, 00:11:12.233 "zoned": false, 00:11:12.233 "supported_io_types": { 00:11:12.233 "read": true, 00:11:12.233 "write": true, 00:11:12.233 "unmap": true, 00:11:12.233 "write_zeroes": true, 00:11:12.233 "flush": true, 00:11:12.233 "reset": true, 00:11:12.233 "compare": false, 00:11:12.233 "compare_and_write": false, 00:11:12.233 "abort": true, 00:11:12.233 "nvme_admin": false, 00:11:12.233 "nvme_io": false 00:11:12.233 }, 00:11:12.233 "memory_domains": [ 00:11:12.233 { 00:11:12.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.233 "dma_device_type": 2 00:11:12.233 } 00:11:12.233 ], 00:11:12.233 "driver_specific": {} 00:11:12.233 } 00:11:12.233 ] 00:11:12.233 20:37:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:12.233 20:37:55 -- common/autotest_common.sh@895 -- # return 0 00:11:12.233 20:37:55 -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:11:12.233 20:37:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:12.233 20:37:55 -- common/autotest_common.sh@10 -- # set +x 00:11:12.233 20:37:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:12.233 20:37:55 -- bdev/blockdev.sh@513 -- # NOT wait 45931 00:11:12.233 20:37:55 -- common/autotest_common.sh@640 -- # local es=0 00:11:12.233 20:37:55 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 45931 00:11:12.233 20:37:55 -- common/autotest_common.sh@628 -- # local arg=wait 00:11:12.233 20:37:55 -- bdev/blockdev.sh@512 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:11:12.233 20:37:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:12.233 20:37:55 -- common/autotest_common.sh@632 -- # type -t wait 00:11:12.233 20:37:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:12.233 20:37:55 -- common/autotest_common.sh@643 -- # wait 45931 00:11:12.492 Running I/O for 5 seconds... 
00:11:12.493 task offset: 39304 on job bdev=EE_Dev_1 fails 00:11:12.493 00:11:12.493 Latency(us) 00:11:12.493 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:12.493 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:11:12.493 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:11:12.493 EE_Dev_1 : 0.00 46121.59 180.16 10482.18 0.00 197.52 46.47 371.77 00:11:12.493 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:11:12.493 Dev_2 : 0.00 46511.63 181.69 0.00 0.00 263.11 43.39 496.78 00:11:12.493 =================================================================================================================== 00:11:12.493 Total : 92633.22 361.85 10482.18 0.00 233.09 43.39 496.78 00:11:12.493 [2024-04-15 20:37:55.794724] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:12.493 request: 00:11:12.493 { 00:11:12.493 "method": "perform_tests", 00:11:12.493 "req_id": 1 00:11:12.493 } 00:11:12.493 Got JSON-RPC error response 00:11:12.493 response: 00:11:12.493 { 00:11:12.493 "code": -32603, 00:11:12.493 "message": "bdevperf failed with error Operation not permitted" 00:11:12.493 } 00:11:14.399 ************************************ 00:11:14.399 END TEST bdev_error 00:11:14.399 ************************************ 00:11:14.399 20:37:57 -- common/autotest_common.sh@643 -- # es=255 00:11:14.399 20:37:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:14.399 20:37:57 -- common/autotest_common.sh@652 -- # es=127 00:11:14.399 20:37:57 -- common/autotest_common.sh@653 -- # case "$es" in 00:11:14.399 20:37:57 -- common/autotest_common.sh@660 -- # es=1 00:11:14.399 20:37:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:14.399 00:11:14.399 real 0m13.977s 00:11:14.399 user 0m14.029s 00:11:14.399 sys 0m0.836s 00:11:14.399 20:37:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:14.399 20:37:57 -- common/autotest_common.sh@10 -- # set +x 00:11:14.399 20:37:57 -- bdev/blockdev.sh@789 -- # run_test bdev_stat stat_test_suite '' 00:11:14.399 20:37:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:14.399 20:37:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:14.399 20:37:57 -- common/autotest_common.sh@10 -- # set +x 00:11:14.399 ************************************ 00:11:14.399 START TEST bdev_stat 00:11:14.399 ************************************ 00:11:14.399 20:37:57 -- common/autotest_common.sh@1104 -- # stat_test_suite '' 00:11:14.399 20:37:57 -- bdev/blockdev.sh@590 -- # STAT_DEV=Malloc_STAT 00:11:14.399 Process Bdev IO statistics testing pid: 46009 00:11:14.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
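The statistics pass that follows takes a whole-bdev iostat snapshot, then a per-channel snapshot, then a second whole-bdev snapshot, and asserts that the summed per-channel read count lands between the two whole-bdev counts (the counters are monotonic and the per-channel query happens between the other two). A condensed sketch of that check, assuming the same rpc.py and default socket as above:

    # Sketch of the bracketing check performed below with '[' ... -lt/-gt ']'.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    io1=$($RPC bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
    per_ch=$($RPC bdev_get_iostat -b Malloc_STAT -c | jq -r '[.channels[].num_read_ops] | add')
    io2=$($RPC bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
    [ "$per_ch" -ge "$io1" ] && [ "$per_ch" -le "$io2" ]     # 562691 <= 584192 <= 618499 in this run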
00:11:14.399 20:37:57 -- bdev/blockdev.sh@594 -- # STAT_PID=46009 00:11:14.399 20:37:57 -- bdev/blockdev.sh@595 -- # echo 'Process Bdev IO statistics testing pid: 46009' 00:11:14.399 20:37:57 -- bdev/blockdev.sh@596 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:11:14.399 20:37:57 -- bdev/blockdev.sh@597 -- # waitforlisten 46009 00:11:14.399 20:37:57 -- bdev/blockdev.sh@593 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:11:14.399 20:37:57 -- common/autotest_common.sh@819 -- # '[' -z 46009 ']' 00:11:14.399 20:37:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.399 20:37:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:14.399 20:37:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.399 20:37:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:14.399 20:37:57 -- common/autotest_common.sh@10 -- # set +x 00:11:14.659 [2024-04-15 20:37:58.019694] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:11:14.659 [2024-04-15 20:37:58.019849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid46009 ] 00:11:14.918 [2024-04-15 20:37:58.190707] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:14.918 [2024-04-15 20:37:58.399779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.918 [2024-04-15 20:37:58.399795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.297 20:37:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:16.297 20:37:59 -- common/autotest_common.sh@852 -- # return 0 00:11:16.297 20:37:59 -- bdev/blockdev.sh@599 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:11:16.297 20:37:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.297 20:37:59 -- common/autotest_common.sh@10 -- # set +x 00:11:16.297 Malloc_STAT 00:11:16.297 20:37:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.297 20:37:59 -- bdev/blockdev.sh@600 -- # waitforbdev Malloc_STAT 00:11:16.297 20:37:59 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_STAT 00:11:16.297 20:37:59 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:16.297 20:37:59 -- common/autotest_common.sh@889 -- # local i 00:11:16.297 20:37:59 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:16.297 20:37:59 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:16.297 20:37:59 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:11:16.297 20:37:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.297 20:37:59 -- common/autotest_common.sh@10 -- # set +x 00:11:16.297 20:37:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.297 20:37:59 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:11:16.297 20:37:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.297 20:37:59 -- common/autotest_common.sh@10 -- # set +x 00:11:16.297 [ 00:11:16.297 { 00:11:16.297 "name": "Malloc_STAT", 00:11:16.297 "aliases": [ 00:11:16.297 "fe4e9377-31e3-4b7d-9533-3ebd32ae4348" 00:11:16.297 ], 00:11:16.297 "product_name": "Malloc disk", 00:11:16.297 "block_size": 512, 00:11:16.297 "num_blocks": 262144, 
00:11:16.297 "uuid": "fe4e9377-31e3-4b7d-9533-3ebd32ae4348", 00:11:16.297 "assigned_rate_limits": { 00:11:16.297 "rw_ios_per_sec": 0, 00:11:16.297 "rw_mbytes_per_sec": 0, 00:11:16.297 "r_mbytes_per_sec": 0, 00:11:16.297 "w_mbytes_per_sec": 0 00:11:16.297 }, 00:11:16.297 "claimed": false, 00:11:16.297 "zoned": false, 00:11:16.297 "supported_io_types": { 00:11:16.297 "read": true, 00:11:16.297 "write": true, 00:11:16.297 "unmap": true, 00:11:16.297 "write_zeroes": true, 00:11:16.297 "flush": true, 00:11:16.297 "reset": true, 00:11:16.297 "compare": false, 00:11:16.297 "compare_and_write": false, 00:11:16.297 "abort": true, 00:11:16.297 "nvme_admin": false, 00:11:16.297 "nvme_io": false 00:11:16.297 }, 00:11:16.297 "memory_domains": [ 00:11:16.297 { 00:11:16.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.297 "dma_device_type": 2 00:11:16.297 } 00:11:16.297 ], 00:11:16.297 "driver_specific": {} 00:11:16.297 } 00:11:16.297 ] 00:11:16.297 20:37:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.297 20:37:59 -- common/autotest_common.sh@895 -- # return 0 00:11:16.297 20:37:59 -- bdev/blockdev.sh@603 -- # sleep 2 00:11:16.297 20:37:59 -- bdev/blockdev.sh@602 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:16.297 Running I/O for 10 seconds... 00:11:18.203 20:38:01 -- bdev/blockdev.sh@604 -- # stat_function_test Malloc_STAT 00:11:18.203 20:38:01 -- bdev/blockdev.sh@557 -- # local bdev_name=Malloc_STAT 00:11:18.203 20:38:01 -- bdev/blockdev.sh@558 -- # local iostats 00:11:18.203 20:38:01 -- bdev/blockdev.sh@559 -- # local io_count1 00:11:18.203 20:38:01 -- bdev/blockdev.sh@560 -- # local io_count2 00:11:18.204 20:38:01 -- bdev/blockdev.sh@561 -- # local iostats_per_channel 00:11:18.204 20:38:01 -- bdev/blockdev.sh@562 -- # local io_count_per_channel1 00:11:18.204 20:38:01 -- bdev/blockdev.sh@563 -- # local io_count_per_channel2 00:11:18.204 20:38:01 -- bdev/blockdev.sh@564 -- # local io_count_per_channel_all=0 00:11:18.204 20:38:01 -- bdev/blockdev.sh@566 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:11:18.204 20:38:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:18.204 20:38:01 -- common/autotest_common.sh@10 -- # set +x 00:11:18.204 20:38:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:18.204 20:38:01 -- bdev/blockdev.sh@566 -- # iostats='{ 00:11:18.204 "tick_rate": 2490000000, 00:11:18.204 "ticks": 1573451629338, 00:11:18.204 "bdevs": [ 00:11:18.204 { 00:11:18.204 "name": "Malloc_STAT", 00:11:18.204 "bytes_read": 2304807424, 00:11:18.204 "num_read_ops": 562691, 00:11:18.204 "bytes_written": 0, 00:11:18.204 "num_write_ops": 0, 00:11:18.204 "bytes_unmapped": 0, 00:11:18.204 "num_unmap_ops": 0, 00:11:18.204 "bytes_copied": 0, 00:11:18.204 "num_copy_ops": 0, 00:11:18.204 "read_latency_ticks": 2456586965566, 00:11:18.204 "max_read_latency_ticks": 7218924, 00:11:18.204 "min_read_latency_ticks": 275120, 00:11:18.204 "write_latency_ticks": 0, 00:11:18.204 "max_write_latency_ticks": 0, 00:11:18.204 "min_write_latency_ticks": 0, 00:11:18.204 "unmap_latency_ticks": 0, 00:11:18.204 "max_unmap_latency_ticks": 0, 00:11:18.204 "min_unmap_latency_ticks": 0, 00:11:18.204 "copy_latency_ticks": 0, 00:11:18.204 "max_copy_latency_ticks": 0, 00:11:18.204 "min_copy_latency_ticks": 0, 00:11:18.204 "io_error": {} 00:11:18.204 } 00:11:18.204 ] 00:11:18.204 }' 00:11:18.204 20:38:01 -- bdev/blockdev.sh@567 -- # jq -r '.bdevs[0].num_read_ops' 00:11:18.463 20:38:01 -- bdev/blockdev.sh@567 -- # io_count1=562691 00:11:18.463 20:38:01 -- 
bdev/blockdev.sh@569 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:11:18.463 20:38:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:18.463 20:38:01 -- common/autotest_common.sh@10 -- # set +x 00:11:18.463 20:38:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:18.463 20:38:01 -- bdev/blockdev.sh@569 -- # iostats_per_channel='{ 00:11:18.463 "tick_rate": 2490000000, 00:11:18.463 "ticks": 1573640376040, 00:11:18.463 "name": "Malloc_STAT", 00:11:18.463 "channels": [ 00:11:18.463 { 00:11:18.463 "thread_id": 2, 00:11:18.463 "bytes_read": 1176502272, 00:11:18.463 "num_read_ops": 287232, 00:11:18.463 "bytes_written": 0, 00:11:18.463 "num_write_ops": 0, 00:11:18.463 "bytes_unmapped": 0, 00:11:18.463 "num_unmap_ops": 0, 00:11:18.463 "bytes_copied": 0, 00:11:18.463 "num_copy_ops": 0, 00:11:18.463 "read_latency_ticks": 1276001454938, 00:11:18.463 "max_read_latency_ticks": 5790674, 00:11:18.463 "min_read_latency_ticks": 2091998, 00:11:18.463 "write_latency_ticks": 0, 00:11:18.463 "max_write_latency_ticks": 0, 00:11:18.463 "min_write_latency_ticks": 0, 00:11:18.463 "unmap_latency_ticks": 0, 00:11:18.463 "max_unmap_latency_ticks": 0, 00:11:18.463 "min_unmap_latency_ticks": 0, 00:11:18.463 "copy_latency_ticks": 0, 00:11:18.463 "max_copy_latency_ticks": 0, 00:11:18.463 "min_copy_latency_ticks": 0 00:11:18.463 }, 00:11:18.463 { 00:11:18.463 "thread_id": 3, 00:11:18.463 "bytes_read": 1216348160, 00:11:18.463 "num_read_ops": 296960, 00:11:18.463 "bytes_written": 0, 00:11:18.463 "num_write_ops": 0, 00:11:18.463 "bytes_unmapped": 0, 00:11:18.463 "num_unmap_ops": 0, 00:11:18.463 "bytes_copied": 0, 00:11:18.463 "num_copy_ops": 0, 00:11:18.463 "read_latency_ticks": 1276494936024, 00:11:18.464 "max_read_latency_ticks": 7218924, 00:11:18.464 "min_read_latency_ticks": 3664282, 00:11:18.464 "write_latency_ticks": 0, 00:11:18.464 "max_write_latency_ticks": 0, 00:11:18.464 "min_write_latency_ticks": 0, 00:11:18.464 "unmap_latency_ticks": 0, 00:11:18.464 "max_unmap_latency_ticks": 0, 00:11:18.464 "min_unmap_latency_ticks": 0, 00:11:18.464 "copy_latency_ticks": 0, 00:11:18.464 "max_copy_latency_ticks": 0, 00:11:18.464 "min_copy_latency_ticks": 0 00:11:18.464 } 00:11:18.464 ] 00:11:18.464 }' 00:11:18.464 20:38:01 -- bdev/blockdev.sh@570 -- # jq -r '.channels[0].num_read_ops' 00:11:18.464 20:38:01 -- bdev/blockdev.sh@570 -- # io_count_per_channel1=287232 00:11:18.464 20:38:01 -- bdev/blockdev.sh@571 -- # io_count_per_channel_all=287232 00:11:18.464 20:38:01 -- bdev/blockdev.sh@572 -- # jq -r '.channels[1].num_read_ops' 00:11:18.464 20:38:01 -- bdev/blockdev.sh@572 -- # io_count_per_channel2=296960 00:11:18.464 20:38:01 -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=584192 00:11:18.464 20:38:01 -- bdev/blockdev.sh@575 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:11:18.464 20:38:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:18.464 20:38:01 -- common/autotest_common.sh@10 -- # set +x 00:11:18.464 20:38:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:18.464 20:38:01 -- bdev/blockdev.sh@575 -- # iostats='{ 00:11:18.464 "tick_rate": 2490000000, 00:11:18.464 "ticks": 1573935304122, 00:11:18.464 "bdevs": [ 00:11:18.464 { 00:11:18.464 "name": "Malloc_STAT", 00:11:18.464 "bytes_read": 2533396992, 00:11:18.464 "num_read_ops": 618499, 00:11:18.464 "bytes_written": 0, 00:11:18.464 "num_write_ops": 0, 00:11:18.464 "bytes_unmapped": 0, 00:11:18.464 "num_unmap_ops": 0, 00:11:18.464 "bytes_copied": 0, 00:11:18.464 "num_copy_ops": 0, 00:11:18.464 "read_latency_ticks": 
2703936057058, 00:11:18.464 "max_read_latency_ticks": 7218924, 00:11:18.464 "min_read_latency_ticks": 275120, 00:11:18.464 "write_latency_ticks": 0, 00:11:18.464 "max_write_latency_ticks": 0, 00:11:18.464 "min_write_latency_ticks": 0, 00:11:18.464 "unmap_latency_ticks": 0, 00:11:18.464 "max_unmap_latency_ticks": 0, 00:11:18.464 "min_unmap_latency_ticks": 0, 00:11:18.464 "copy_latency_ticks": 0, 00:11:18.464 "max_copy_latency_ticks": 0, 00:11:18.464 "min_copy_latency_ticks": 0, 00:11:18.464 "io_error": {} 00:11:18.464 } 00:11:18.464 ] 00:11:18.464 }' 00:11:18.464 20:38:01 -- bdev/blockdev.sh@576 -- # jq -r '.bdevs[0].num_read_ops' 00:11:18.464 20:38:01 -- bdev/blockdev.sh@576 -- # io_count2=618499 00:11:18.464 20:38:01 -- bdev/blockdev.sh@581 -- # '[' 584192 -lt 562691 ']' 00:11:18.464 20:38:01 -- bdev/blockdev.sh@581 -- # '[' 584192 -gt 618499 ']' 00:11:18.464 20:38:01 -- bdev/blockdev.sh@606 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:11:18.464 20:38:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:18.464 20:38:01 -- common/autotest_common.sh@10 -- # set +x 00:11:18.464 00:11:18.464 Latency(us) 00:11:18.464 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:18.464 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:11:18.464 Malloc_STAT : 2.19 143222.93 559.46 0.00 0.00 1784.90 454.01 2329.29 00:11:18.464 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:11:18.464 Malloc_STAT : 2.19 147967.36 578.00 0.00 0.00 1727.82 276.36 2908.32 00:11:18.464 =================================================================================================================== 00:11:18.464 Total : 291190.29 1137.46 0.00 0.00 1755.89 276.36 2908.32 00:11:18.724 0 00:11:18.724 20:38:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:18.724 20:38:02 -- bdev/blockdev.sh@607 -- # killprocess 46009 00:11:18.724 20:38:02 -- common/autotest_common.sh@926 -- # '[' -z 46009 ']' 00:11:18.724 20:38:02 -- common/autotest_common.sh@930 -- # kill -0 46009 00:11:18.724 20:38:02 -- common/autotest_common.sh@931 -- # uname 00:11:18.724 20:38:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:18.724 20:38:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 46009 00:11:18.724 killing process with pid 46009 00:11:18.724 20:38:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:18.724 20:38:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:18.724 20:38:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 46009' 00:11:18.724 20:38:02 -- common/autotest_common.sh@945 -- # kill 46009 00:11:18.724 Received shutdown signal, test time was about 2.351430 seconds 00:11:18.724 00:11:18.724 Latency(us) 00:11:18.724 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:18.724 =================================================================================================================== 00:11:18.724 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:18.724 20:38:02 -- common/autotest_common.sh@950 -- # wait 46009 00:11:20.628 ************************************ 00:11:20.628 END TEST bdev_stat 00:11:20.628 ************************************ 00:11:20.628 20:38:03 -- bdev/blockdev.sh@608 -- # trap - SIGINT SIGTERM EXIT 00:11:20.628 00:11:20.628 real 0m5.817s 00:11:20.628 user 0m11.015s 00:11:20.628 sys 0m0.431s 00:11:20.628 20:38:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:20.628 20:38:03 -- 
common/autotest_common.sh@10 -- # set +x 00:11:20.628 20:38:03 -- bdev/blockdev.sh@792 -- # [[ bdev == gpt ]] 00:11:20.628 20:38:03 -- bdev/blockdev.sh@796 -- # [[ bdev == crypto_sw ]] 00:11:20.628 20:38:03 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:11:20.628 20:38:03 -- bdev/blockdev.sh@809 -- # cleanup 00:11:20.628 20:38:03 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:11:20.628 20:38:03 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:20.628 20:38:03 -- bdev/blockdev.sh@24 -- # [[ bdev == rbd ]] 00:11:20.628 20:38:03 -- bdev/blockdev.sh@28 -- # [[ bdev == daos ]] 00:11:20.628 20:38:03 -- bdev/blockdev.sh@32 -- # [[ bdev = \g\p\t ]] 00:11:20.628 20:38:03 -- bdev/blockdev.sh@38 -- # [[ bdev == xnvme ]] 00:11:20.628 ************************************ 00:11:20.628 END TEST blockdev_general 00:11:20.628 ************************************ 00:11:20.628 00:11:20.628 real 2m6.948s 00:11:20.628 user 5m32.042s 00:11:20.628 sys 0m9.211s 00:11:20.628 20:38:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:20.628 20:38:03 -- common/autotest_common.sh@10 -- # set +x 00:11:20.628 20:38:03 -- spdk/autotest.sh@196 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:11:20.628 20:38:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:20.628 20:38:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:20.628 20:38:03 -- common/autotest_common.sh@10 -- # set +x 00:11:20.628 ************************************ 00:11:20.628 START TEST bdev_raid 00:11:20.628 ************************************ 00:11:20.628 20:38:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:11:20.628 * Looking for test storage... 00:11:20.628 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:20.628 20:38:03 -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:20.628 20:38:03 -- bdev/nbd_common.sh@6 -- # set -e 00:11:20.628 20:38:03 -- bdev/bdev_raid.sh@14 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:11:20.628 20:38:03 -- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR 00:11:20.628 20:38:03 -- bdev/bdev_raid.sh@716 -- # uname -s 00:11:20.628 20:38:03 -- bdev/bdev_raid.sh@716 -- # '[' Linux = Linux ']' 00:11:20.628 20:38:03 -- bdev/bdev_raid.sh@716 -- # modprobe -n nbd 00:11:20.628 modprobe: FATAL: Module nbd not found. 
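The modprobe failure above means the NBD kernel module is unavailable on this host, which appears to gate the nbd-backed raid function tests; the suite still proceeds with the RPC-level tests below. The raid0_resize_test that follows boils down to a handful of RPCs against the dedicated raid socket; as a sketch using only commands that appear in the trace (the RPC variable is shorthand, not part of the test script):

    # Sketch of the raid0 resize flow exercised below, against the bdev_svc
    # app listening on /var/tmp/spdk-raid.sock.
    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    $RPC bdev_null_create Base_1 32 512                           # 32 MiB null bdev, 512 B blocks
    $RPC bdev_null_create Base_2 32 512
    $RPC bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid   # raid0, 64 KiB strip size
    $RPC bdev_null_resize Base_1 64                               # grow one base bdev to 64 MiB
    $RPC bdev_get_bdevs -b Raid | jq '.[].num_blocks'             # stays 131072 until both legs grow

As the trace below shows, resizing only Base_1 leaves the raid at 131072 blocks (64 MiB); the raid's minimum block count, and hence its size, only moves once Base_2 is resized as well, after which num_blocks reads 262144 (128 MiB).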
00:11:20.628 20:38:03 -- bdev/bdev_raid.sh@723 -- # run_test raid0_resize_test raid0_resize_test 00:11:20.628 20:38:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:20.628 20:38:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:20.628 20:38:03 -- common/autotest_common.sh@10 -- # set +x 00:11:20.628 ************************************ 00:11:20.628 START TEST raid0_resize_test 00:11:20.628 ************************************ 00:11:20.628 20:38:03 -- common/autotest_common.sh@1104 -- # raid0_resize_test 00:11:20.628 20:38:03 -- bdev/bdev_raid.sh@293 -- # local blksize=512 00:11:20.628 20:38:03 -- bdev/bdev_raid.sh@294 -- # local bdev_size_mb=32 00:11:20.628 20:38:03 -- bdev/bdev_raid.sh@295 -- # local new_bdev_size_mb=64 00:11:20.628 20:38:03 -- bdev/bdev_raid.sh@296 -- # local blkcnt 00:11:20.628 20:38:03 -- bdev/bdev_raid.sh@297 -- # local raid_size_mb 00:11:20.628 20:38:03 -- bdev/bdev_raid.sh@298 -- # local new_raid_size_mb 00:11:20.628 Process raid pid: 46184 00:11:20.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:20.628 20:38:03 -- bdev/bdev_raid.sh@301 -- # raid_pid=46184 00:11:20.628 20:38:03 -- bdev/bdev_raid.sh@302 -- # echo 'Process raid pid: 46184' 00:11:20.628 20:38:03 -- bdev/bdev_raid.sh@303 -- # waitforlisten 46184 /var/tmp/spdk-raid.sock 00:11:20.628 20:38:03 -- bdev/bdev_raid.sh@300 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:20.628 20:38:03 -- common/autotest_common.sh@819 -- # '[' -z 46184 ']' 00:11:20.628 20:38:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:20.628 20:38:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:20.628 20:38:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:20.628 20:38:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:20.629 20:38:03 -- common/autotest_common.sh@10 -- # set +x 00:11:20.888 [2024-04-15 20:38:04.143936] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:11:20.888 [2024-04-15 20:38:04.144078] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:20.888 [2024-04-15 20:38:04.291786] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.159 [2024-04-15 20:38:04.482336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.451 [2024-04-15 20:38:04.680839] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:21.451 20:38:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:21.451 20:38:04 -- common/autotest_common.sh@852 -- # return 0 00:11:21.451 20:38:04 -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:11:21.710 Base_1 00:11:21.710 20:38:05 -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:11:21.710 Base_2 00:11:21.710 20:38:05 -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:11:21.969 [2024-04-15 20:38:05.373102] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:11:21.969 [2024-04-15 20:38:05.374459] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:11:21.969 [2024-04-15 20:38:05.374520] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000027380 00:11:21.969 [2024-04-15 20:38:05.374529] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:21.969 [2024-04-15 20:38:05.374661] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005380 00:11:21.969 [2024-04-15 20:38:05.374838] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000027380 00:11:21.969 [2024-04-15 20:38:05.374847] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x616000027380 00:11:21.969 [2024-04-15 20:38:05.374956] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.969 20:38:05 -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:11:22.229 [2024-04-15 20:38:05.540852] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:22.229 [2024-04-15 20:38:05.540876] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:11:22.229 true 00:11:22.229 20:38:05 -- bdev/bdev_raid.sh@314 -- # jq '.[].num_blocks' 00:11:22.229 20:38:05 -- bdev/bdev_raid.sh@314 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:11:22.229 [2024-04-15 20:38:05.724798] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:22.488 20:38:05 -- bdev/bdev_raid.sh@314 -- # blkcnt=131072 00:11:22.488 20:38:05 -- bdev/bdev_raid.sh@315 -- # raid_size_mb=64 00:11:22.489 20:38:05 -- bdev/bdev_raid.sh@316 -- # '[' 64 '!=' 64 ']' 00:11:22.489 20:38:05 -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:11:22.489 [2024-04-15 20:38:05.912382] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 
00:11:22.489 [2024-04-15 20:38:05.912418] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:11:22.489 [2024-04-15 20:38:05.912462] raid0.c: 402:raid0_resize: *NOTICE*: raid0 'Raid': min blockcount was changed from 262144 to 262144 00:11:22.489 [2024-04-15 20:38:05.912510] bdev_raid.c:1572:raid_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:22.489 true 00:11:22.489 20:38:05 -- bdev/bdev_raid.sh@325 -- # jq '.[].num_blocks' 00:11:22.489 20:38:05 -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:11:22.748 [2024-04-15 20:38:06.088212] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:22.748 20:38:06 -- bdev/bdev_raid.sh@325 -- # blkcnt=262144 00:11:22.748 20:38:06 -- bdev/bdev_raid.sh@326 -- # raid_size_mb=128 00:11:22.748 20:38:06 -- bdev/bdev_raid.sh@327 -- # '[' 128 '!=' 128 ']' 00:11:22.748 20:38:06 -- bdev/bdev_raid.sh@332 -- # killprocess 46184 00:11:22.748 20:38:06 -- common/autotest_common.sh@926 -- # '[' -z 46184 ']' 00:11:22.748 20:38:06 -- common/autotest_common.sh@930 -- # kill -0 46184 00:11:22.748 20:38:06 -- common/autotest_common.sh@931 -- # uname 00:11:22.748 20:38:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:22.748 20:38:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 46184 00:11:22.748 killing process with pid 46184 00:11:22.748 20:38:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:22.748 20:38:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:22.748 20:38:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 46184' 00:11:22.748 20:38:06 -- common/autotest_common.sh@945 -- # kill 46184 00:11:22.748 20:38:06 -- common/autotest_common.sh@950 -- # wait 46184 00:11:22.748 [2024-04-15 20:38:06.131439] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:22.748 [2024-04-15 20:38:06.131512] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:22.748 [2024-04-15 20:38:06.131542] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:22.748 [2024-04-15 20:38:06.131550] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027380 name Raid, state offline 00:11:22.748 [2024-04-15 20:38:06.132126] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:24.126 20:38:07 -- bdev/bdev_raid.sh@334 -- # return 0 00:11:24.126 00:11:24.126 real 0m3.419s 00:11:24.126 user 0m4.473s 00:11:24.126 sys 0m0.423s 00:11:24.126 20:38:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:24.126 20:38:07 -- common/autotest_common.sh@10 -- # set +x 00:11:24.126 ************************************ 00:11:24.126 END TEST raid0_resize_test 00:11:24.126 ************************************ 00:11:24.126 20:38:07 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:11:24.126 20:38:07 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:11:24.126 20:38:07 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:11:24.126 20:38:07 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:11:24.126 20:38:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:24.126 20:38:07 -- common/autotest_common.sh@10 -- # set +x 00:11:24.126 ************************************ 00:11:24.126 START TEST raid_state_function_test 
00:11:24.126 ************************************ 00:11:24.126 20:38:07 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 false 00:11:24.126 20:38:07 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:11:24.126 20:38:07 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:11:24.126 20:38:07 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:11:24.126 20:38:07 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:11:24.126 20:38:07 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:11:24.126 20:38:07 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:11:24.126 20:38:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:24.126 20:38:07 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:11:24.126 20:38:07 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:24.126 20:38:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:24.126 20:38:07 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:11:24.126 20:38:07 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:24.126 20:38:07 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:24.126 20:38:07 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:11:24.126 20:38:07 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:11:24.126 20:38:07 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:11:24.126 20:38:07 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:11:24.127 20:38:07 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:11:24.127 20:38:07 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:11:24.127 20:38:07 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:11:24.127 20:38:07 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:11:24.127 20:38:07 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:11:24.127 20:38:07 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:11:24.127 Process raid pid: 46266 00:11:24.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:24.127 20:38:07 -- bdev/bdev_raid.sh@226 -- # raid_pid=46266 00:11:24.127 20:38:07 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 46266' 00:11:24.127 20:38:07 -- bdev/bdev_raid.sh@228 -- # waitforlisten 46266 /var/tmp/spdk-raid.sock 00:11:24.127 20:38:07 -- common/autotest_common.sh@819 -- # '[' -z 46266 ']' 00:11:24.127 20:38:07 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:24.127 20:38:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:24.127 20:38:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:24.127 20:38:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:24.127 20:38:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:24.127 20:38:07 -- common/autotest_common.sh@10 -- # set +x 00:11:24.386 [2024-04-15 20:38:07.628463] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:11:24.386 [2024-04-15 20:38:07.628615] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:24.386 [2024-04-15 20:38:07.783320] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.645 [2024-04-15 20:38:07.975621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.904 [2024-04-15 20:38:08.175657] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:25.842 20:38:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:25.842 20:38:08 -- common/autotest_common.sh@852 -- # return 0 00:11:25.842 20:38:08 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:25.842 [2024-04-15 20:38:09.167075] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:25.842 [2024-04-15 20:38:09.167149] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:25.842 [2024-04-15 20:38:09.167160] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:25.842 [2024-04-15 20:38:09.167178] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:25.842 20:38:09 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:25.842 20:38:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:25.842 20:38:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:25.843 20:38:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:25.843 20:38:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:25.843 20:38:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:25.843 20:38:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:25.843 20:38:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:25.843 20:38:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:25.843 20:38:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:25.843 20:38:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.843 20:38:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:26.102 20:38:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:26.102 "name": "Existed_Raid", 00:11:26.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.102 "strip_size_kb": 64, 00:11:26.102 "state": "configuring", 00:11:26.102 "raid_level": "raid0", 00:11:26.102 "superblock": false, 00:11:26.102 "num_base_bdevs": 2, 00:11:26.102 "num_base_bdevs_discovered": 0, 00:11:26.102 "num_base_bdevs_operational": 2, 00:11:26.102 "base_bdevs_list": [ 00:11:26.102 { 00:11:26.102 "name": "BaseBdev1", 00:11:26.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.102 "is_configured": false, 00:11:26.102 "data_offset": 0, 00:11:26.102 "data_size": 0 00:11:26.102 }, 00:11:26.102 { 00:11:26.102 "name": "BaseBdev2", 00:11:26.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.102 "is_configured": false, 00:11:26.102 "data_offset": 0, 00:11:26.102 "data_size": 0 00:11:26.102 } 00:11:26.102 ] 00:11:26.102 }' 00:11:26.102 20:38:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:26.102 20:38:09 -- 
common/autotest_common.sh@10 -- # set +x 00:11:26.670 20:38:09 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:26.670 [2024-04-15 20:38:10.009711] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:26.670 [2024-04-15 20:38:10.009746] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:11:26.670 20:38:10 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:26.670 [2024-04-15 20:38:10.165531] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:26.670 [2024-04-15 20:38:10.165620] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:26.670 [2024-04-15 20:38:10.165632] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:26.670 [2024-04-15 20:38:10.165848] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:26.929 20:38:10 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:26.929 BaseBdev1 00:11:26.929 [2024-04-15 20:38:10.376683] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:26.929 20:38:10 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:11:26.929 20:38:10 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:11:26.929 20:38:10 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:26.929 20:38:10 -- common/autotest_common.sh@889 -- # local i 00:11:26.929 20:38:10 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:26.929 20:38:10 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:26.929 20:38:10 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:27.188 20:38:10 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:27.447 [ 00:11:27.447 { 00:11:27.447 "name": "BaseBdev1", 00:11:27.447 "aliases": [ 00:11:27.447 "6ac80cd0-2b30-4749-884e-b1cfb878c2e4" 00:11:27.447 ], 00:11:27.447 "product_name": "Malloc disk", 00:11:27.447 "block_size": 512, 00:11:27.447 "num_blocks": 65536, 00:11:27.447 "uuid": "6ac80cd0-2b30-4749-884e-b1cfb878c2e4", 00:11:27.447 "assigned_rate_limits": { 00:11:27.447 "rw_ios_per_sec": 0, 00:11:27.447 "rw_mbytes_per_sec": 0, 00:11:27.447 "r_mbytes_per_sec": 0, 00:11:27.447 "w_mbytes_per_sec": 0 00:11:27.447 }, 00:11:27.447 "claimed": true, 00:11:27.447 "claim_type": "exclusive_write", 00:11:27.447 "zoned": false, 00:11:27.447 "supported_io_types": { 00:11:27.447 "read": true, 00:11:27.447 "write": true, 00:11:27.447 "unmap": true, 00:11:27.447 "write_zeroes": true, 00:11:27.447 "flush": true, 00:11:27.447 "reset": true, 00:11:27.447 "compare": false, 00:11:27.447 "compare_and_write": false, 00:11:27.447 "abort": true, 00:11:27.447 "nvme_admin": false, 00:11:27.447 "nvme_io": false 00:11:27.447 }, 00:11:27.447 "memory_domains": [ 00:11:27.447 { 00:11:27.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.447 "dma_device_type": 2 00:11:27.447 } 00:11:27.447 ], 00:11:27.447 "driver_specific": {} 00:11:27.447 } 00:11:27.447 ] 00:11:27.447 20:38:10 
-- common/autotest_common.sh@895 -- # return 0 00:11:27.447 20:38:10 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:27.447 20:38:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:27.447 20:38:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:27.447 20:38:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:27.447 20:38:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:27.447 20:38:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:27.447 20:38:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:27.447 20:38:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:27.447 20:38:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:27.447 20:38:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:27.447 20:38:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.447 20:38:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:27.447 20:38:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:27.447 "name": "Existed_Raid", 00:11:27.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.447 "strip_size_kb": 64, 00:11:27.447 "state": "configuring", 00:11:27.447 "raid_level": "raid0", 00:11:27.447 "superblock": false, 00:11:27.447 "num_base_bdevs": 2, 00:11:27.447 "num_base_bdevs_discovered": 1, 00:11:27.447 "num_base_bdevs_operational": 2, 00:11:27.447 "base_bdevs_list": [ 00:11:27.447 { 00:11:27.447 "name": "BaseBdev1", 00:11:27.447 "uuid": "6ac80cd0-2b30-4749-884e-b1cfb878c2e4", 00:11:27.447 "is_configured": true, 00:11:27.447 "data_offset": 0, 00:11:27.447 "data_size": 65536 00:11:27.447 }, 00:11:27.447 { 00:11:27.447 "name": "BaseBdev2", 00:11:27.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.447 "is_configured": false, 00:11:27.447 "data_offset": 0, 00:11:27.447 "data_size": 0 00:11:27.447 } 00:11:27.447 ] 00:11:27.447 }' 00:11:27.447 20:38:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:27.447 20:38:10 -- common/autotest_common.sh@10 -- # set +x 00:11:28.018 20:38:11 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:28.278 [2024-04-15 20:38:11.626802] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:28.278 [2024-04-15 20:38:11.626852] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027080 name Existed_Raid, state configuring 00:11:28.278 20:38:11 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:11:28.278 20:38:11 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:28.538 [2024-04-15 20:38:11.778613] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:28.538 [2024-04-15 20:38:11.779935] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:28.538 [2024-04-15 20:38:11.779989] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:28.538 20:38:11 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:11:28.538 20:38:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:28.538 20:38:11 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:28.538 20:38:11 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:28.538 20:38:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:28.538 20:38:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:28.538 20:38:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:28.538 20:38:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:28.538 20:38:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:28.538 20:38:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:28.538 20:38:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:28.538 20:38:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:28.538 20:38:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:28.538 20:38:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.538 20:38:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:28.538 "name": "Existed_Raid", 00:11:28.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.538 "strip_size_kb": 64, 00:11:28.538 "state": "configuring", 00:11:28.538 "raid_level": "raid0", 00:11:28.538 "superblock": false, 00:11:28.538 "num_base_bdevs": 2, 00:11:28.538 "num_base_bdevs_discovered": 1, 00:11:28.538 "num_base_bdevs_operational": 2, 00:11:28.538 "base_bdevs_list": [ 00:11:28.538 { 00:11:28.538 "name": "BaseBdev1", 00:11:28.538 "uuid": "6ac80cd0-2b30-4749-884e-b1cfb878c2e4", 00:11:28.538 "is_configured": true, 00:11:28.538 "data_offset": 0, 00:11:28.538 "data_size": 65536 00:11:28.538 }, 00:11:28.538 { 00:11:28.538 "name": "BaseBdev2", 00:11:28.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.538 "is_configured": false, 00:11:28.538 "data_offset": 0, 00:11:28.538 "data_size": 0 00:11:28.538 } 00:11:28.538 ] 00:11:28.538 }' 00:11:28.538 20:38:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:28.538 20:38:11 -- common/autotest_common.sh@10 -- # set +x 00:11:29.105 20:38:12 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:29.364 [2024-04-15 20:38:12.695399] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:29.364 [2024-04-15 20:38:12.695440] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000027f80 00:11:29.364 [2024-04-15 20:38:12.695448] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:29.364 [2024-04-15 20:38:12.695551] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:11:29.364 BaseBdev2 00:11:29.364 [2024-04-15 20:38:12.695976] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000027f80 00:11:29.364 [2024-04-15 20:38:12.695996] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000027f80 00:11:29.364 [2024-04-15 20:38:12.696189] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:29.364 20:38:12 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:11:29.364 20:38:12 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:11:29.364 20:38:12 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:29.364 20:38:12 -- common/autotest_common.sh@889 -- # local i 00:11:29.364 20:38:12 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:29.364 20:38:12 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:29.364 
20:38:12 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:29.364 20:38:12 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:29.623 [ 00:11:29.623 { 00:11:29.623 "name": "BaseBdev2", 00:11:29.623 "aliases": [ 00:11:29.623 "399472ad-5e0b-4190-a808-6497f67c3373" 00:11:29.623 ], 00:11:29.623 "product_name": "Malloc disk", 00:11:29.623 "block_size": 512, 00:11:29.623 "num_blocks": 65536, 00:11:29.623 "uuid": "399472ad-5e0b-4190-a808-6497f67c3373", 00:11:29.623 "assigned_rate_limits": { 00:11:29.623 "rw_ios_per_sec": 0, 00:11:29.623 "rw_mbytes_per_sec": 0, 00:11:29.623 "r_mbytes_per_sec": 0, 00:11:29.623 "w_mbytes_per_sec": 0 00:11:29.623 }, 00:11:29.623 "claimed": true, 00:11:29.623 "claim_type": "exclusive_write", 00:11:29.623 "zoned": false, 00:11:29.623 "supported_io_types": { 00:11:29.623 "read": true, 00:11:29.623 "write": true, 00:11:29.623 "unmap": true, 00:11:29.623 "write_zeroes": true, 00:11:29.623 "flush": true, 00:11:29.623 "reset": true, 00:11:29.623 "compare": false, 00:11:29.623 "compare_and_write": false, 00:11:29.623 "abort": true, 00:11:29.623 "nvme_admin": false, 00:11:29.623 "nvme_io": false 00:11:29.623 }, 00:11:29.623 "memory_domains": [ 00:11:29.623 { 00:11:29.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.623 "dma_device_type": 2 00:11:29.623 } 00:11:29.623 ], 00:11:29.623 "driver_specific": {} 00:11:29.623 } 00:11:29.623 ] 00:11:29.623 20:38:13 -- common/autotest_common.sh@895 -- # return 0 00:11:29.623 20:38:13 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:11:29.623 20:38:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:29.623 20:38:13 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:11:29.623 20:38:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:29.623 20:38:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:29.623 20:38:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:29.623 20:38:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:29.623 20:38:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:29.623 20:38:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:29.623 20:38:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:29.623 20:38:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:29.623 20:38:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:29.623 20:38:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.623 20:38:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:29.882 20:38:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:29.882 "name": "Existed_Raid", 00:11:29.882 "uuid": "4e4d38ce-de86-492b-a4b0-c264e4ab793d", 00:11:29.882 "strip_size_kb": 64, 00:11:29.882 "state": "online", 00:11:29.882 "raid_level": "raid0", 00:11:29.882 "superblock": false, 00:11:29.882 "num_base_bdevs": 2, 00:11:29.882 "num_base_bdevs_discovered": 2, 00:11:29.882 "num_base_bdevs_operational": 2, 00:11:29.882 "base_bdevs_list": [ 00:11:29.882 { 00:11:29.882 "name": "BaseBdev1", 00:11:29.882 "uuid": "6ac80cd0-2b30-4749-884e-b1cfb878c2e4", 00:11:29.882 "is_configured": true, 00:11:29.882 "data_offset": 0, 00:11:29.883 "data_size": 65536 00:11:29.883 }, 00:11:29.883 { 00:11:29.883 "name": "BaseBdev2", 
00:11:29.883 "uuid": "399472ad-5e0b-4190-a808-6497f67c3373", 00:11:29.883 "is_configured": true, 00:11:29.883 "data_offset": 0, 00:11:29.883 "data_size": 65536 00:11:29.883 } 00:11:29.883 ] 00:11:29.883 }' 00:11:29.883 20:38:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:29.883 20:38:13 -- common/autotest_common.sh@10 -- # set +x 00:11:30.450 20:38:13 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:30.450 [2024-04-15 20:38:13.901634] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:30.450 [2024-04-15 20:38:13.901674] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:30.450 [2024-04-15 20:38:13.901717] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:30.709 20:38:14 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:11:30.709 20:38:14 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:11:30.709 20:38:14 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:11:30.709 20:38:14 -- bdev/bdev_raid.sh@197 -- # return 1 00:11:30.709 20:38:14 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:11:30.709 20:38:14 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:11:30.709 20:38:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:30.709 20:38:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:11:30.709 20:38:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:30.709 20:38:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:30.709 20:38:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:11:30.709 20:38:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:30.709 20:38:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:30.709 20:38:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:30.709 20:38:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:30.709 20:38:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.709 20:38:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:30.709 20:38:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:30.709 "name": "Existed_Raid", 00:11:30.709 "uuid": "4e4d38ce-de86-492b-a4b0-c264e4ab793d", 00:11:30.709 "strip_size_kb": 64, 00:11:30.709 "state": "offline", 00:11:30.709 "raid_level": "raid0", 00:11:30.709 "superblock": false, 00:11:30.709 "num_base_bdevs": 2, 00:11:30.709 "num_base_bdevs_discovered": 1, 00:11:30.709 "num_base_bdevs_operational": 1, 00:11:30.709 "base_bdevs_list": [ 00:11:30.709 { 00:11:30.709 "name": null, 00:11:30.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.709 "is_configured": false, 00:11:30.709 "data_offset": 0, 00:11:30.709 "data_size": 65536 00:11:30.709 }, 00:11:30.709 { 00:11:30.709 "name": "BaseBdev2", 00:11:30.709 "uuid": "399472ad-5e0b-4190-a808-6497f67c3373", 00:11:30.709 "is_configured": true, 00:11:30.709 "data_offset": 0, 00:11:30.709 "data_size": 65536 00:11:30.709 } 00:11:30.709 ] 00:11:30.709 }' 00:11:30.709 20:38:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:30.709 20:38:14 -- common/autotest_common.sh@10 -- # set +x 00:11:31.277 20:38:14 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:11:31.277 20:38:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:31.277 20:38:14 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:31.277 20:38:14 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:11:31.535 20:38:14 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:11:31.535 20:38:14 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:31.535 20:38:14 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:31.794 [2024-04-15 20:38:15.040033] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:31.794 [2024-04-15 20:38:15.040088] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027f80 name Existed_Raid, state offline 00:11:31.794 20:38:15 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:11:31.794 20:38:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:31.794 20:38:15 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:11:31.794 20:38:15 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:32.053 20:38:15 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:11:32.053 20:38:15 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:11:32.053 20:38:15 -- bdev/bdev_raid.sh@287 -- # killprocess 46266 00:11:32.053 20:38:15 -- common/autotest_common.sh@926 -- # '[' -z 46266 ']' 00:11:32.053 20:38:15 -- common/autotest_common.sh@930 -- # kill -0 46266 00:11:32.053 20:38:15 -- common/autotest_common.sh@931 -- # uname 00:11:32.053 20:38:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:32.053 20:38:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 46266 00:11:32.053 killing process with pid 46266 00:11:32.053 20:38:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:32.053 20:38:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:32.053 20:38:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 46266' 00:11:32.053 20:38:15 -- common/autotest_common.sh@945 -- # kill 46266 00:11:32.053 [2024-04-15 20:38:15.338271] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:32.053 20:38:15 -- common/autotest_common.sh@950 -- # wait 46266 00:11:32.053 [2024-04-15 20:38:15.338357] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:33.470 ************************************ 00:11:33.470 END TEST raid_state_function_test 00:11:33.470 ************************************ 00:11:33.470 20:38:16 -- bdev/bdev_raid.sh@289 -- # return 0 00:11:33.470 00:11:33.470 real 0m9.118s 00:11:33.470 user 0m14.947s 00:11:33.470 sys 0m1.071s 00:11:33.470 20:38:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:33.470 20:38:16 -- common/autotest_common.sh@10 -- # set +x 00:11:33.470 20:38:16 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:11:33.470 20:38:16 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:11:33.470 20:38:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:33.470 20:38:16 -- common/autotest_common.sh@10 -- # set +x 00:11:33.470 ************************************ 00:11:33.470 START TEST raid_state_function_test_sb 00:11:33.470 ************************************ 00:11:33.470 20:38:16 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 true 00:11:33.470 20:38:16 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:11:33.470 20:38:16 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:11:33.470 20:38:16 -- 
bdev/bdev_raid.sh@204 -- # local superblock=true 00:11:33.470 20:38:16 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:11:33.470 20:38:16 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:11:33.470 20:38:16 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:11:33.470 20:38:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:33.470 20:38:16 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:11:33.470 20:38:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:33.470 20:38:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:33.470 20:38:16 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:11:33.470 20:38:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:33.470 20:38:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:33.470 20:38:16 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:11:33.470 20:38:16 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:11:33.470 20:38:16 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:11:33.470 20:38:16 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:11:33.470 20:38:16 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:11:33.470 20:38:16 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:11:33.470 20:38:16 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:11:33.470 20:38:16 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:11:33.470 20:38:16 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:11:33.470 20:38:16 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:11:33.470 Process raid pid: 46584 00:11:33.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:33.470 20:38:16 -- bdev/bdev_raid.sh@226 -- # raid_pid=46584 00:11:33.470 20:38:16 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 46584' 00:11:33.470 20:38:16 -- bdev/bdev_raid.sh@228 -- # waitforlisten 46584 /var/tmp/spdk-raid.sock 00:11:33.470 20:38:16 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:33.470 20:38:16 -- common/autotest_common.sh@819 -- # '[' -z 46584 ']' 00:11:33.470 20:38:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:33.471 20:38:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:33.471 20:38:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:33.471 20:38:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:33.471 20:38:16 -- common/autotest_common.sh@10 -- # set +x 00:11:33.471 [2024-04-15 20:38:16.803187] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
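A note on the startup sequence above: each stage of this suite launches a fresh bdev_svc application that exposes a JSON-RPC server on a Unix domain socket, and the harness blocks in waitforlisten until that socket answers before issuing any bdev RPCs. A simplified sketch of the launch-and-wait step, using the paths from the trace (an illustration of the pattern, not the verbatim autotest_common.sh implementation):

    rpc_sock=/var/tmp/spdk-raid.sock
    test/app/bdev_svc/bdev_svc -r "$rpc_sock" -i 0 -L bdev_raid &
    raid_pid=$!
    # Poll until the app is alive and its RPC socket answers a trivial call.
    until scripts/rpc.py -s "$rpc_sock" rpc_get_methods &> /dev/null; do
        kill -0 "$raid_pid" || exit 1   # give up if the app already died
        sleep 0.1
    done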
00:11:33.471 [2024-04-15 20:38:16.803334] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:33.731 [2024-04-15 20:38:16.968635] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.731 [2024-04-15 20:38:17.155968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.990 [2024-04-15 20:38:17.343013] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:34.928 20:38:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:34.928 20:38:18 -- common/autotest_common.sh@852 -- # return 0 00:11:34.928 20:38:18 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:34.928 [2024-04-15 20:38:18.367025] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:34.928 [2024-04-15 20:38:18.367127] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:34.928 [2024-04-15 20:38:18.367147] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:34.928 [2024-04-15 20:38:18.367172] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:34.928 20:38:18 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:34.928 20:38:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:34.928 20:38:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:34.928 20:38:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:34.928 20:38:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:34.928 20:38:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:34.928 20:38:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:34.928 20:38:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:34.928 20:38:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:34.928 20:38:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:34.928 20:38:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:34.928 20:38:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.188 20:38:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:35.188 "name": "Existed_Raid", 00:11:35.188 "uuid": "2121aa79-7d82-458d-84de-94be73dc6a93", 00:11:35.188 "strip_size_kb": 64, 00:11:35.188 "state": "configuring", 00:11:35.188 "raid_level": "raid0", 00:11:35.188 "superblock": true, 00:11:35.188 "num_base_bdevs": 2, 00:11:35.188 "num_base_bdevs_discovered": 0, 00:11:35.188 "num_base_bdevs_operational": 2, 00:11:35.188 "base_bdevs_list": [ 00:11:35.188 { 00:11:35.188 "name": "BaseBdev1", 00:11:35.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.188 "is_configured": false, 00:11:35.188 "data_offset": 0, 00:11:35.188 "data_size": 0 00:11:35.188 }, 00:11:35.188 { 00:11:35.188 "name": "BaseBdev2", 00:11:35.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.188 "is_configured": false, 00:11:35.188 "data_offset": 0, 00:11:35.188 "data_size": 0 00:11:35.188 } 00:11:35.188 ] 00:11:35.188 }' 00:11:35.188 20:38:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:35.188 20:38:18 -- 
common/autotest_common.sh@10 -- # set +x 00:11:35.757 20:38:19 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:35.757 [2024-04-15 20:38:19.233487] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:35.757 [2024-04-15 20:38:19.233533] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:11:35.757 20:38:19 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:36.016 [2024-04-15 20:38:19.401370] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:36.016 [2024-04-15 20:38:19.401454] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:36.016 [2024-04-15 20:38:19.401465] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:36.016 [2024-04-15 20:38:19.401487] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:36.016 20:38:19 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:36.275 [2024-04-15 20:38:19.593529] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:36.275 BaseBdev1 00:11:36.275 20:38:19 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:11:36.275 20:38:19 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:11:36.275 20:38:19 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:36.275 20:38:19 -- common/autotest_common.sh@889 -- # local i 00:11:36.275 20:38:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:36.275 20:38:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:36.275 20:38:19 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:36.275 20:38:19 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:36.534 [ 00:11:36.535 { 00:11:36.535 "name": "BaseBdev1", 00:11:36.535 "aliases": [ 00:11:36.535 "2032dd2c-cbee-408d-862a-664633e043e0" 00:11:36.535 ], 00:11:36.535 "product_name": "Malloc disk", 00:11:36.535 "block_size": 512, 00:11:36.535 "num_blocks": 65536, 00:11:36.535 "uuid": "2032dd2c-cbee-408d-862a-664633e043e0", 00:11:36.535 "assigned_rate_limits": { 00:11:36.535 "rw_ios_per_sec": 0, 00:11:36.535 "rw_mbytes_per_sec": 0, 00:11:36.535 "r_mbytes_per_sec": 0, 00:11:36.535 "w_mbytes_per_sec": 0 00:11:36.535 }, 00:11:36.535 "claimed": true, 00:11:36.535 "claim_type": "exclusive_write", 00:11:36.535 "zoned": false, 00:11:36.535 "supported_io_types": { 00:11:36.535 "read": true, 00:11:36.535 "write": true, 00:11:36.535 "unmap": true, 00:11:36.535 "write_zeroes": true, 00:11:36.535 "flush": true, 00:11:36.535 "reset": true, 00:11:36.535 "compare": false, 00:11:36.535 "compare_and_write": false, 00:11:36.535 "abort": true, 00:11:36.535 "nvme_admin": false, 00:11:36.535 "nvme_io": false 00:11:36.535 }, 00:11:36.535 "memory_domains": [ 00:11:36.535 { 00:11:36.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.535 "dma_device_type": 2 00:11:36.535 } 00:11:36.535 ], 00:11:36.535 "driver_specific": {} 00:11:36.535 } 00:11:36.535 ] 00:11:36.535 
20:38:19 -- common/autotest_common.sh@895 -- # return 0 00:11:36.535 20:38:19 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:36.535 20:38:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:36.535 20:38:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:36.535 20:38:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:36.535 20:38:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:36.535 20:38:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:36.535 20:38:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:36.535 20:38:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:36.535 20:38:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:36.535 20:38:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:36.535 20:38:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:36.535 20:38:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.794 20:38:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:36.795 "name": "Existed_Raid", 00:11:36.795 "uuid": "b5441d27-4a23-4d08-b3c6-d0f59d90a189", 00:11:36.795 "strip_size_kb": 64, 00:11:36.795 "state": "configuring", 00:11:36.795 "raid_level": "raid0", 00:11:36.795 "superblock": true, 00:11:36.795 "num_base_bdevs": 2, 00:11:36.795 "num_base_bdevs_discovered": 1, 00:11:36.795 "num_base_bdevs_operational": 2, 00:11:36.795 "base_bdevs_list": [ 00:11:36.795 { 00:11:36.795 "name": "BaseBdev1", 00:11:36.795 "uuid": "2032dd2c-cbee-408d-862a-664633e043e0", 00:11:36.795 "is_configured": true, 00:11:36.795 "data_offset": 2048, 00:11:36.795 "data_size": 63488 00:11:36.795 }, 00:11:36.795 { 00:11:36.795 "name": "BaseBdev2", 00:11:36.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.795 "is_configured": false, 00:11:36.795 "data_offset": 0, 00:11:36.795 "data_size": 0 00:11:36.795 } 00:11:36.795 ] 00:11:36.795 }' 00:11:36.795 20:38:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:36.795 20:38:20 -- common/autotest_common.sh@10 -- # set +x 00:11:37.375 20:38:20 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:37.375 [2024-04-15 20:38:20.815806] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:37.376 [2024-04-15 20:38:20.815858] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027080 name Existed_Raid, state configuring 00:11:37.376 20:38:20 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:11:37.376 20:38:20 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:37.635 20:38:21 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:37.894 BaseBdev1 00:11:37.894 20:38:21 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:11:37.894 20:38:21 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:11:37.894 20:38:21 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:37.894 20:38:21 -- common/autotest_common.sh@889 -- # local i 00:11:37.894 20:38:21 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:37.894 20:38:21 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:37.894 20:38:21 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:38.153 20:38:21 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:38.153 [ 00:11:38.153 { 00:11:38.153 "name": "BaseBdev1", 00:11:38.153 "aliases": [ 00:11:38.153 "b1d06f8a-7b78-46ef-850e-5371717c7419" 00:11:38.153 ], 00:11:38.153 "product_name": "Malloc disk", 00:11:38.153 "block_size": 512, 00:11:38.153 "num_blocks": 65536, 00:11:38.153 "uuid": "b1d06f8a-7b78-46ef-850e-5371717c7419", 00:11:38.153 "assigned_rate_limits": { 00:11:38.153 "rw_ios_per_sec": 0, 00:11:38.153 "rw_mbytes_per_sec": 0, 00:11:38.153 "r_mbytes_per_sec": 0, 00:11:38.153 "w_mbytes_per_sec": 0 00:11:38.153 }, 00:11:38.153 "claimed": false, 00:11:38.153 "zoned": false, 00:11:38.153 "supported_io_types": { 00:11:38.153 "read": true, 00:11:38.153 "write": true, 00:11:38.153 "unmap": true, 00:11:38.153 "write_zeroes": true, 00:11:38.153 "flush": true, 00:11:38.153 "reset": true, 00:11:38.153 "compare": false, 00:11:38.153 "compare_and_write": false, 00:11:38.153 "abort": true, 00:11:38.153 "nvme_admin": false, 00:11:38.153 "nvme_io": false 00:11:38.153 }, 00:11:38.153 "memory_domains": [ 00:11:38.153 { 00:11:38.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.153 "dma_device_type": 2 00:11:38.153 } 00:11:38.153 ], 00:11:38.153 "driver_specific": {} 00:11:38.153 } 00:11:38.153 ] 00:11:38.153 20:38:21 -- common/autotest_common.sh@895 -- # return 0 00:11:38.153 20:38:21 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:38.413 [2024-04-15 20:38:21.741658] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:38.413 [2024-04-15 20:38:21.743048] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:38.413 [2024-04-15 20:38:21.743107] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:38.413 20:38:21 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:11:38.413 20:38:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:38.413 20:38:21 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:38.413 20:38:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:38.413 20:38:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:38.413 20:38:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:38.413 20:38:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:38.413 20:38:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:38.413 20:38:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:38.413 20:38:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:38.413 20:38:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:38.413 20:38:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:38.413 20:38:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.413 20:38:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:38.673 20:38:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:38.673 "name": "Existed_Raid", 00:11:38.673 "uuid": "e6f9c4bf-4742-4fc0-9f59-b8d66a475d19", 00:11:38.673 "strip_size_kb": 64, 00:11:38.673 "state": 
"configuring", 00:11:38.673 "raid_level": "raid0", 00:11:38.673 "superblock": true, 00:11:38.673 "num_base_bdevs": 2, 00:11:38.673 "num_base_bdevs_discovered": 1, 00:11:38.673 "num_base_bdevs_operational": 2, 00:11:38.673 "base_bdevs_list": [ 00:11:38.673 { 00:11:38.673 "name": "BaseBdev1", 00:11:38.673 "uuid": "b1d06f8a-7b78-46ef-850e-5371717c7419", 00:11:38.673 "is_configured": true, 00:11:38.673 "data_offset": 2048, 00:11:38.673 "data_size": 63488 00:11:38.673 }, 00:11:38.673 { 00:11:38.673 "name": "BaseBdev2", 00:11:38.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.673 "is_configured": false, 00:11:38.673 "data_offset": 0, 00:11:38.673 "data_size": 0 00:11:38.673 } 00:11:38.673 ] 00:11:38.673 }' 00:11:38.673 20:38:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:38.673 20:38:21 -- common/autotest_common.sh@10 -- # set +x 00:11:38.933 20:38:22 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:39.193 [2024-04-15 20:38:22.612406] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:39.193 [2024-04-15 20:38:22.612549] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000028580 00:11:39.193 [2024-04-15 20:38:22.612560] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:39.193 BaseBdev2 00:11:39.193 [2024-04-15 20:38:22.612638] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:11:39.193 [2024-04-15 20:38:22.613274] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000028580 00:11:39.193 [2024-04-15 20:38:22.613288] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000028580 00:11:39.193 [2024-04-15 20:38:22.613384] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:39.193 20:38:22 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:11:39.193 20:38:22 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:11:39.193 20:38:22 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:39.193 20:38:22 -- common/autotest_common.sh@889 -- # local i 00:11:39.193 20:38:22 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:39.193 20:38:22 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:39.193 20:38:22 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:39.451 20:38:22 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:39.451 [ 00:11:39.451 { 00:11:39.451 "name": "BaseBdev2", 00:11:39.451 "aliases": [ 00:11:39.451 "3cdddbde-d73c-439d-a410-0806dda81047" 00:11:39.451 ], 00:11:39.451 "product_name": "Malloc disk", 00:11:39.451 "block_size": 512, 00:11:39.451 "num_blocks": 65536, 00:11:39.451 "uuid": "3cdddbde-d73c-439d-a410-0806dda81047", 00:11:39.451 "assigned_rate_limits": { 00:11:39.451 "rw_ios_per_sec": 0, 00:11:39.451 "rw_mbytes_per_sec": 0, 00:11:39.451 "r_mbytes_per_sec": 0, 00:11:39.451 "w_mbytes_per_sec": 0 00:11:39.451 }, 00:11:39.451 "claimed": true, 00:11:39.451 "claim_type": "exclusive_write", 00:11:39.451 "zoned": false, 00:11:39.451 "supported_io_types": { 00:11:39.451 "read": true, 00:11:39.451 "write": true, 00:11:39.451 "unmap": true, 00:11:39.451 "write_zeroes": true, 00:11:39.451 "flush": true, 00:11:39.451 
"reset": true, 00:11:39.451 "compare": false, 00:11:39.451 "compare_and_write": false, 00:11:39.451 "abort": true, 00:11:39.451 "nvme_admin": false, 00:11:39.451 "nvme_io": false 00:11:39.451 }, 00:11:39.451 "memory_domains": [ 00:11:39.451 { 00:11:39.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.452 "dma_device_type": 2 00:11:39.452 } 00:11:39.452 ], 00:11:39.452 "driver_specific": {} 00:11:39.452 } 00:11:39.452 ] 00:11:39.710 20:38:22 -- common/autotest_common.sh@895 -- # return 0 00:11:39.710 20:38:22 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:11:39.710 20:38:22 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:39.710 20:38:22 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:11:39.710 20:38:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:39.710 20:38:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:39.710 20:38:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:39.710 20:38:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:39.710 20:38:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:39.710 20:38:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:39.710 20:38:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:39.710 20:38:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:39.711 20:38:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:39.711 20:38:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.711 20:38:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:39.711 20:38:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:39.711 "name": "Existed_Raid", 00:11:39.711 "uuid": "e6f9c4bf-4742-4fc0-9f59-b8d66a475d19", 00:11:39.711 "strip_size_kb": 64, 00:11:39.711 "state": "online", 00:11:39.711 "raid_level": "raid0", 00:11:39.711 "superblock": true, 00:11:39.711 "num_base_bdevs": 2, 00:11:39.711 "num_base_bdevs_discovered": 2, 00:11:39.711 "num_base_bdevs_operational": 2, 00:11:39.711 "base_bdevs_list": [ 00:11:39.711 { 00:11:39.711 "name": "BaseBdev1", 00:11:39.711 "uuid": "b1d06f8a-7b78-46ef-850e-5371717c7419", 00:11:39.711 "is_configured": true, 00:11:39.711 "data_offset": 2048, 00:11:39.711 "data_size": 63488 00:11:39.711 }, 00:11:39.711 { 00:11:39.711 "name": "BaseBdev2", 00:11:39.711 "uuid": "3cdddbde-d73c-439d-a410-0806dda81047", 00:11:39.711 "is_configured": true, 00:11:39.711 "data_offset": 2048, 00:11:39.711 "data_size": 63488 00:11:39.711 } 00:11:39.711 ] 00:11:39.711 }' 00:11:39.711 20:38:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:39.711 20:38:23 -- common/autotest_common.sh@10 -- # set +x 00:11:40.277 20:38:23 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:40.277 [2024-04-15 20:38:23.762760] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:40.277 [2024-04-15 20:38:23.762794] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:40.277 [2024-04-15 20:38:23.762831] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:40.536 20:38:23 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:11:40.536 20:38:23 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:11:40.536 20:38:23 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:11:40.536 20:38:23 -- bdev/bdev_raid.sh@197 -- # return 1 00:11:40.536 
20:38:23 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:11:40.536 20:38:23 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:11:40.536 20:38:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:40.536 20:38:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:11:40.536 20:38:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:40.536 20:38:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:40.536 20:38:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:11:40.536 20:38:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:40.536 20:38:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:40.536 20:38:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:40.536 20:38:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:40.536 20:38:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:40.536 20:38:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.795 20:38:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:40.795 "name": "Existed_Raid", 00:11:40.795 "uuid": "e6f9c4bf-4742-4fc0-9f59-b8d66a475d19", 00:11:40.795 "strip_size_kb": 64, 00:11:40.795 "state": "offline", 00:11:40.795 "raid_level": "raid0", 00:11:40.795 "superblock": true, 00:11:40.795 "num_base_bdevs": 2, 00:11:40.795 "num_base_bdevs_discovered": 1, 00:11:40.795 "num_base_bdevs_operational": 1, 00:11:40.795 "base_bdevs_list": [ 00:11:40.795 { 00:11:40.795 "name": null, 00:11:40.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.795 "is_configured": false, 00:11:40.795 "data_offset": 2048, 00:11:40.795 "data_size": 63488 00:11:40.795 }, 00:11:40.795 { 00:11:40.795 "name": "BaseBdev2", 00:11:40.795 "uuid": "3cdddbde-d73c-439d-a410-0806dda81047", 00:11:40.795 "is_configured": true, 00:11:40.795 "data_offset": 2048, 00:11:40.795 "data_size": 63488 00:11:40.795 } 00:11:40.795 ] 00:11:40.795 }' 00:11:40.795 20:38:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:40.795 20:38:24 -- common/autotest_common.sh@10 -- # set +x 00:11:41.363 20:38:24 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:11:41.363 20:38:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:41.363 20:38:24 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:11:41.363 20:38:24 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:41.622 20:38:24 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:11:41.622 20:38:24 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:41.622 20:38:24 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:41.622 [2024-04-15 20:38:25.100680] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:41.622 [2024-04-15 20:38:25.100745] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000028580 name Existed_Raid, state offline 00:11:41.881 20:38:25 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:11:41.881 20:38:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:41.882 20:38:25 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:41.882 20:38:25 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:11:42.141 20:38:25 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:11:42.141 20:38:25 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:11:42.141 20:38:25 -- bdev/bdev_raid.sh@287 -- # killprocess 46584 00:11:42.141 20:38:25 -- common/autotest_common.sh@926 -- # '[' -z 46584 ']' 00:11:42.141 20:38:25 -- common/autotest_common.sh@930 -- # kill -0 46584 00:11:42.141 20:38:25 -- common/autotest_common.sh@931 -- # uname 00:11:42.141 20:38:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:42.141 20:38:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 46584 00:11:42.141 killing process with pid 46584 00:11:42.141 20:38:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:42.141 20:38:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:42.141 20:38:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 46584' 00:11:42.141 20:38:25 -- common/autotest_common.sh@945 -- # kill 46584 00:11:42.141 20:38:25 -- common/autotest_common.sh@950 -- # wait 46584 00:11:42.141 [2024-04-15 20:38:25.433885] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:42.141 [2024-04-15 20:38:25.433994] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:43.521 ************************************ 00:11:43.521 END TEST raid_state_function_test_sb 00:11:43.521 ************************************ 00:11:43.521 20:38:26 -- bdev/bdev_raid.sh@289 -- # return 0 00:11:43.521 00:11:43.521 real 0m10.033s 00:11:43.521 user 0m16.456s 00:11:43.521 sys 0m1.280s 00:11:43.521 20:38:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:43.521 20:38:26 -- common/autotest_common.sh@10 -- # set +x 00:11:43.521 20:38:26 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:11:43.521 20:38:26 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:11:43.521 20:38:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:43.521 20:38:26 -- common/autotest_common.sh@10 -- # set +x 00:11:43.521 ************************************ 00:11:43.521 START TEST raid_superblock_test 00:11:43.521 ************************************ 00:11:43.521 20:38:26 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 2 00:11:43.521 20:38:26 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:11:43.521 20:38:26 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:11:43.521 20:38:26 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:11:43.521 20:38:26 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:11:43.521 20:38:26 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:11:43.521 20:38:26 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:11:43.521 20:38:26 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:11:43.521 20:38:26 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:11:43.521 20:38:26 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:11:43.521 20:38:26 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:11:43.521 20:38:26 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:11:43.521 20:38:26 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:11:43.521 20:38:26 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:11:43.521 20:38:26 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:11:43.521 20:38:26 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:11:43.521 20:38:26 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:11:43.521 20:38:26 -- bdev/bdev_raid.sh@357 -- # raid_pid=46909 00:11:43.521 20:38:26 -- bdev/bdev_raid.sh@358 -- # waitforlisten 46909 
/var/tmp/spdk-raid.sock 00:11:43.521 20:38:26 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:11:43.521 20:38:26 -- common/autotest_common.sh@819 -- # '[' -z 46909 ']' 00:11:43.521 20:38:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:43.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:43.521 20:38:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:43.521 20:38:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:43.521 20:38:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:43.521 20:38:26 -- common/autotest_common.sh@10 -- # set +x 00:11:43.521 [2024-04-15 20:38:26.903424] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:11:43.521 [2024-04-15 20:38:26.903577] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid46909 ] 00:11:43.781 [2024-04-15 20:38:27.060514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.781 [2024-04-15 20:38:27.249391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.040 [2024-04-15 20:38:27.439686] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.299 20:38:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:44.299 20:38:27 -- common/autotest_common.sh@852 -- # return 0 00:11:44.299 20:38:27 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:11:44.299 20:38:27 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:11:44.299 20:38:27 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:11:44.299 20:38:27 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:11:44.299 20:38:27 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:44.299 20:38:27 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:44.299 20:38:27 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:11:44.299 20:38:27 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:44.299 20:38:27 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:11:44.557 malloc1 00:11:44.557 20:38:27 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:44.557 [2024-04-15 20:38:27.998175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:44.557 [2024-04-15 20:38:27.998268] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.557 [2024-04-15 20:38:27.998314] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027080 00:11:44.557 [2024-04-15 20:38:27.998352] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.557 [2024-04-15 20:38:28.000029] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.557 [2024-04-15 20:38:28.000072] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:44.557 pt1 00:11:44.557 20:38:28 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 
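raid_superblock_test builds each base bdev as a passthru device stacked on a malloc device, which is what the vbdev_passthru notices above record: the pt bdev can later be deleted and re-created so that the RAID superblock left on the underlying malloc storage gets re-examined (that re-examination appears further down, where pt1 is re-created and "raid superblock found on bdev pt1" is logged). The two RPCs involved, as issued in the trace:

    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 \
        -p pt1 -u 00000000-0000-0000-0000-000000000001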
00:11:44.557 20:38:28 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:11:44.557 20:38:28 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:11:44.557 20:38:28 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:11:44.557 20:38:28 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:44.557 20:38:28 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:44.557 20:38:28 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:11:44.557 20:38:28 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:44.557 20:38:28 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:11:44.815 malloc2 00:11:44.815 20:38:28 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:45.075 [2024-04-15 20:38:28.374324] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:45.075 [2024-04-15 20:38:28.374407] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.075 [2024-04-15 20:38:28.374446] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000028e80 00:11:45.075 [2024-04-15 20:38:28.374483] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.075 [2024-04-15 20:38:28.376121] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.075 [2024-04-15 20:38:28.376159] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:45.075 pt2 00:11:45.075 20:38:28 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:11:45.075 20:38:28 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:11:45.075 20:38:28 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:11:45.075 [2024-04-15 20:38:28.542177] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:45.075 [2024-04-15 20:38:28.545129] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:45.075 [2024-04-15 20:38:28.545445] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002a380 00:11:45.075 [2024-04-15 20:38:28.545482] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:45.075 [2024-04-15 20:38:28.545786] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:11:45.075 [2024-04-15 20:38:28.546329] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002a380 00:11:45.075 [2024-04-15 20:38:28.546363] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002a380 00:11:45.075 [2024-04-15 20:38:28.546708] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:45.075 20:38:28 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:11:45.075 20:38:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:45.075 20:38:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:45.075 20:38:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:45.075 20:38:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:45.075 20:38:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
00:11:45.075 20:38:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:45.075 20:38:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:45.075 20:38:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:45.075 20:38:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:45.075 20:38:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:45.075 20:38:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.335 20:38:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:45.335 "name": "raid_bdev1", 00:11:45.335 "uuid": "2923f5ed-61b3-4537-98ee-ec401c0eb6b0", 00:11:45.335 "strip_size_kb": 64, 00:11:45.335 "state": "online", 00:11:45.335 "raid_level": "raid0", 00:11:45.335 "superblock": true, 00:11:45.335 "num_base_bdevs": 2, 00:11:45.335 "num_base_bdevs_discovered": 2, 00:11:45.335 "num_base_bdevs_operational": 2, 00:11:45.335 "base_bdevs_list": [ 00:11:45.335 { 00:11:45.335 "name": "pt1", 00:11:45.335 "uuid": "91da4500-a363-50d4-be94-01a832683c2f", 00:11:45.335 "is_configured": true, 00:11:45.335 "data_offset": 2048, 00:11:45.335 "data_size": 63488 00:11:45.335 }, 00:11:45.335 { 00:11:45.335 "name": "pt2", 00:11:45.335 "uuid": "2a0f0c5b-ba47-56ea-8371-85d3ed6daf66", 00:11:45.335 "is_configured": true, 00:11:45.335 "data_offset": 2048, 00:11:45.335 "data_size": 63488 00:11:45.335 } 00:11:45.335 ] 00:11:45.335 }' 00:11:45.335 20:38:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:45.335 20:38:28 -- common/autotest_common.sh@10 -- # set +x 00:11:45.902 20:38:29 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:11:45.902 20:38:29 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:46.161 [2024-04-15 20:38:29.405431] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:46.161 20:38:29 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=2923f5ed-61b3-4537-98ee-ec401c0eb6b0 00:11:46.161 20:38:29 -- bdev/bdev_raid.sh@380 -- # '[' -z 2923f5ed-61b3-4537-98ee-ec401c0eb6b0 ']' 00:11:46.161 20:38:29 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:46.161 [2024-04-15 20:38:29.557044] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:46.161 [2024-04-15 20:38:29.557075] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:46.161 [2024-04-15 20:38:29.557133] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:46.161 [2024-04-15 20:38:29.557161] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:46.161 [2024-04-15 20:38:29.557169] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002a380 name raid_bdev1, state offline 00:11:46.161 20:38:29 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:11:46.161 20:38:29 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:46.420 20:38:29 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:11:46.420 20:38:29 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:11:46.420 20:38:29 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:11:46.420 20:38:29 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
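raid_bdev1 has now been deleted and pt1 with it (pt2 follows immediately below), but the superblocks written to malloc1 and malloc2 remain on their storage. The next stretch of the trace leans on that: attempting bdev_raid_create directly over the malloc bdevs must be rejected with JSON-RPC error -17 ("File exists"), which the harness asserts through its NOT wrapper. A plain-bash sketch of the same assertion, assuming the running app on the same socket:

    if scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
            -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1; then
        echo 'bdev_raid_create should fail over superblocked base bdevs' >&2
        exit 1
    fi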
00:11:46.420 20:38:29 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:11:46.420 20:38:29 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:46.680 20:38:30 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:46.680 20:38:30 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:11:46.939 20:38:30 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:11:46.939 20:38:30 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:11:46.939 20:38:30 -- common/autotest_common.sh@640 -- # local es=0 00:11:46.939 20:38:30 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:11:46.939 20:38:30 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:46.939 20:38:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:46.939 20:38:30 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:46.939 20:38:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:46.939 20:38:30 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:46.939 20:38:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:46.939 20:38:30 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:46.939 20:38:30 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:46.939 20:38:30 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:11:46.939 [2024-04-15 20:38:30.415932] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:46.939 [2024-04-15 20:38:30.417410] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:46.939 [2024-04-15 20:38:30.417454] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:11:46.939 [2024-04-15 20:38:30.417508] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:11:46.939 [2024-04-15 20:38:30.417534] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:46.939 [2024-04-15 20:38:30.417543] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002a980 name raid_bdev1, state configuring 00:11:46.939 request: 00:11:46.939 { 00:11:46.939 "name": "raid_bdev1", 00:11:46.939 "raid_level": "raid0", 00:11:46.939 "base_bdevs": [ 00:11:46.939 "malloc1", 00:11:46.939 "malloc2" 00:11:46.939 ], 00:11:46.939 "superblock": false, 00:11:46.939 "strip_size_kb": 64, 00:11:46.939 "method": "bdev_raid_create", 00:11:46.939 "req_id": 1 00:11:46.939 } 00:11:46.939 Got JSON-RPC error response 00:11:46.939 response: 00:11:46.939 { 00:11:46.939 "code": -17, 00:11:46.939 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:46.939 } 00:11:46.939 20:38:30 -- common/autotest_common.sh@643 -- # es=1 00:11:46.939 20:38:30 -- common/autotest_common.sh@651 
-- # (( es > 128 )) 00:11:46.939 20:38:30 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:46.939 20:38:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:46.939 20:38:30 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:11:46.939 20:38:30 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:47.198 20:38:30 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:11:47.198 20:38:30 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:11:47.198 20:38:30 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:47.458 [2024-04-15 20:38:30.771367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:47.458 [2024-04-15 20:38:30.771457] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.458 [2024-04-15 20:38:30.771492] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002b880 00:11:47.458 [2024-04-15 20:38:30.771516] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.458 [2024-04-15 20:38:30.773200] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.458 [2024-04-15 20:38:30.773242] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:47.458 [2024-04-15 20:38:30.773323] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:11:47.458 [2024-04-15 20:38:30.773371] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:47.458 pt1 00:11:47.458 20:38:30 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:11:47.458 20:38:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:47.458 20:38:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:47.458 20:38:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:47.458 20:38:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:47.458 20:38:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:47.458 20:38:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:47.458 20:38:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:47.458 20:38:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:47.458 20:38:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:47.458 20:38:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.458 20:38:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:47.458 20:38:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:47.458 "name": "raid_bdev1", 00:11:47.458 "uuid": "2923f5ed-61b3-4537-98ee-ec401c0eb6b0", 00:11:47.458 "strip_size_kb": 64, 00:11:47.458 "state": "configuring", 00:11:47.458 "raid_level": "raid0", 00:11:47.458 "superblock": true, 00:11:47.458 "num_base_bdevs": 2, 00:11:47.458 "num_base_bdevs_discovered": 1, 00:11:47.458 "num_base_bdevs_operational": 2, 00:11:47.458 "base_bdevs_list": [ 00:11:47.458 { 00:11:47.458 "name": "pt1", 00:11:47.458 "uuid": "91da4500-a363-50d4-be94-01a832683c2f", 00:11:47.458 "is_configured": true, 00:11:47.458 "data_offset": 2048, 00:11:47.458 "data_size": 63488 00:11:47.458 }, 00:11:47.458 { 00:11:47.458 "name": null, 00:11:47.458 "uuid": "2a0f0c5b-ba47-56ea-8371-85d3ed6daf66", 00:11:47.458 
"is_configured": false, 00:11:47.458 "data_offset": 2048, 00:11:47.458 "data_size": 63488 00:11:47.458 } 00:11:47.458 ] 00:11:47.458 }' 00:11:47.458 20:38:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:47.458 20:38:30 -- common/autotest_common.sh@10 -- # set +x 00:11:48.395 20:38:31 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:11:48.395 20:38:31 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:11:48.395 20:38:31 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:11:48.395 20:38:31 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:48.395 [2024-04-15 20:38:31.690087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:48.395 [2024-04-15 20:38:31.690183] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.395 [2024-04-15 20:38:31.690225] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002d380 00:11:48.395 [2024-04-15 20:38:31.690247] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.395 [2024-04-15 20:38:31.690534] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.395 [2024-04-15 20:38:31.690566] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:48.395 [2024-04-15 20:38:31.690636] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:11:48.395 [2024-04-15 20:38:31.690878] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:48.395 [2024-04-15 20:38:31.691060] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002cd80 00:11:48.395 [2024-04-15 20:38:31.691071] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:48.395 [2024-04-15 20:38:31.691164] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:11:48.395 [2024-04-15 20:38:31.691320] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002cd80 00:11:48.395 [2024-04-15 20:38:31.691329] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002cd80 00:11:48.395 [2024-04-15 20:38:31.691406] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.395 pt2 00:11:48.395 20:38:31 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:11:48.395 20:38:31 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:11:48.395 20:38:31 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:11:48.395 20:38:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:48.395 20:38:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:48.395 20:38:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:48.395 20:38:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:48.395 20:38:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:48.395 20:38:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:48.395 20:38:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:48.395 20:38:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:48.395 20:38:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:48.395 20:38:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:48.395 20:38:31 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.395 20:38:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:48.395 "name": "raid_bdev1", 00:11:48.395 "uuid": "2923f5ed-61b3-4537-98ee-ec401c0eb6b0", 00:11:48.395 "strip_size_kb": 64, 00:11:48.395 "state": "online", 00:11:48.395 "raid_level": "raid0", 00:11:48.395 "superblock": true, 00:11:48.395 "num_base_bdevs": 2, 00:11:48.395 "num_base_bdevs_discovered": 2, 00:11:48.395 "num_base_bdevs_operational": 2, 00:11:48.395 "base_bdevs_list": [ 00:11:48.395 { 00:11:48.395 "name": "pt1", 00:11:48.395 "uuid": "91da4500-a363-50d4-be94-01a832683c2f", 00:11:48.395 "is_configured": true, 00:11:48.395 "data_offset": 2048, 00:11:48.395 "data_size": 63488 00:11:48.395 }, 00:11:48.395 { 00:11:48.395 "name": "pt2", 00:11:48.395 "uuid": "2a0f0c5b-ba47-56ea-8371-85d3ed6daf66", 00:11:48.395 "is_configured": true, 00:11:48.395 "data_offset": 2048, 00:11:48.395 "data_size": 63488 00:11:48.395 } 00:11:48.395 ] 00:11:48.395 }' 00:11:48.395 20:38:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:48.395 20:38:31 -- common/autotest_common.sh@10 -- # set +x 00:11:48.964 20:38:32 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:48.964 20:38:32 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:11:49.223 [2024-04-15 20:38:32.540924] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:49.223 20:38:32 -- bdev/bdev_raid.sh@430 -- # '[' 2923f5ed-61b3-4537-98ee-ec401c0eb6b0 '!=' 2923f5ed-61b3-4537-98ee-ec401c0eb6b0 ']' 00:11:49.223 20:38:32 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:11:49.223 20:38:32 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:11:49.223 20:38:32 -- bdev/bdev_raid.sh@197 -- # return 1 00:11:49.223 20:38:32 -- bdev/bdev_raid.sh@511 -- # killprocess 46909 00:11:49.223 20:38:32 -- common/autotest_common.sh@926 -- # '[' -z 46909 ']' 00:11:49.223 20:38:32 -- common/autotest_common.sh@930 -- # kill -0 46909 00:11:49.223 20:38:32 -- common/autotest_common.sh@931 -- # uname 00:11:49.223 20:38:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:49.224 20:38:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 46909 00:11:49.224 20:38:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:49.224 20:38:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:49.224 killing process with pid 46909 00:11:49.224 20:38:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 46909' 00:11:49.224 20:38:32 -- common/autotest_common.sh@945 -- # kill 46909 00:11:49.224 20:38:32 -- common/autotest_common.sh@950 -- # wait 46909 00:11:49.224 [2024-04-15 20:38:32.588995] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:49.224 [2024-04-15 20:38:32.589058] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:49.224 [2024-04-15 20:38:32.589085] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:49.224 [2024-04-15 20:38:32.589093] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002cd80 name raid_bdev1, state offline 00:11:49.483 [2024-04-15 20:38:32.765438] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:50.862 20:38:34 -- bdev/bdev_raid.sh@513 -- # return 0 00:11:50.862 00:11:50.862 real 0m7.281s 00:11:50.862 user 0m11.897s 00:11:50.862 sys 0m0.922s 00:11:50.862 20:38:34 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:11:50.862 20:38:34 -- common/autotest_common.sh@10 -- # set +x 00:11:50.862 ************************************ 00:11:50.862 END TEST raid_superblock_test 00:11:50.862 ************************************ 00:11:50.862 20:38:34 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:11:50.863 20:38:34 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:11:50.863 20:38:34 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:11:50.863 20:38:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:50.863 20:38:34 -- common/autotest_common.sh@10 -- # set +x 00:11:50.863 ************************************ 00:11:50.863 START TEST raid_state_function_test 00:11:50.863 ************************************ 00:11:50.863 20:38:34 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 false 00:11:50.863 20:38:34 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:11:50.863 20:38:34 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:11:50.863 20:38:34 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:11:50.863 20:38:34 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:11:50.863 20:38:34 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:11:50.863 20:38:34 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:11:50.863 20:38:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:50.863 20:38:34 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:11:50.863 20:38:34 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:50.863 20:38:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:50.863 20:38:34 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:11:50.863 20:38:34 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:50.863 20:38:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:50.863 Process raid pid: 47144 00:11:50.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
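For readers following the log: the raid_state_function_test starting here drives the freshly launched bdev_svc daemon entirely over the /var/tmp/spdk-raid.sock RPC socket. A minimal sketch of that sequence, assembled from the commands visible in this log (the grouping into one script is an assumed simplification, not the verbatim test code):

    # illustrative reconstruction of the RPC sequence, not the test script itself
    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    # back two malloc bdevs (32 MiB, 512-byte blocks) for the array
    $rpc bdev_malloc_create 32 512 -b BaseBdev1
    $rpc bdev_malloc_create 32 512 -b BaseBdev2
    # assemble a concat raid with a 64 KiB strip size (no -s, so no superblock)
    $rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    # inspect the resulting raid state
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

The test additionally exercises the error paths seen below, creating the raid before its base bdevs exist so the array stays in the "configuring" state.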
00:11:50.863 20:38:34 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:11:50.863 20:38:34 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:11:50.863 20:38:34 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:11:50.863 20:38:34 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:11:50.863 20:38:34 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:11:50.863 20:38:34 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:11:50.863 20:38:34 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:11:50.863 20:38:34 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:11:50.863 20:38:34 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:11:50.863 20:38:34 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:11:50.863 20:38:34 -- bdev/bdev_raid.sh@226 -- # raid_pid=47144 00:11:50.863 20:38:34 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 47144' 00:11:50.863 20:38:34 -- bdev/bdev_raid.sh@228 -- # waitforlisten 47144 /var/tmp/spdk-raid.sock 00:11:50.863 20:38:34 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:50.863 20:38:34 -- common/autotest_common.sh@819 -- # '[' -z 47144 ']' 00:11:50.863 20:38:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:50.863 20:38:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:50.863 20:38:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:50.863 20:38:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:50.863 20:38:34 -- common/autotest_common.sh@10 -- # set +x 00:11:50.863 [2024-04-15 20:38:34.254438] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:11:50.863 [2024-04-15 20:38:34.254585] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:51.121 [2024-04-15 20:38:34.412629] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.121 [2024-04-15 20:38:34.610515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.380 [2024-04-15 20:38:34.812188] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:52.317 20:38:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:52.317 20:38:35 -- common/autotest_common.sh@852 -- # return 0 00:11:52.317 20:38:35 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:52.576 [2024-04-15 20:38:35.841013] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:52.576 [2024-04-15 20:38:35.841087] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:52.576 [2024-04-15 20:38:35.841099] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:52.576 [2024-04-15 20:38:35.841115] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:52.576 20:38:35 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:52.577 20:38:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:52.577 20:38:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:52.577 20:38:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:52.577 20:38:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:52.577 20:38:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:52.577 20:38:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:52.577 20:38:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:52.577 20:38:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:52.577 20:38:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:52.577 20:38:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.577 20:38:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:52.577 20:38:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:52.577 "name": "Existed_Raid", 00:11:52.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.577 "strip_size_kb": 64, 00:11:52.577 "state": "configuring", 00:11:52.577 "raid_level": "concat", 00:11:52.577 "superblock": false, 00:11:52.577 "num_base_bdevs": 2, 00:11:52.577 "num_base_bdevs_discovered": 0, 00:11:52.577 "num_base_bdevs_operational": 2, 00:11:52.577 "base_bdevs_list": [ 00:11:52.577 { 00:11:52.577 "name": "BaseBdev1", 00:11:52.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.577 "is_configured": false, 00:11:52.577 "data_offset": 0, 00:11:52.577 "data_size": 0 00:11:52.577 }, 00:11:52.577 { 00:11:52.577 "name": "BaseBdev2", 00:11:52.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.577 "is_configured": false, 00:11:52.577 "data_offset": 0, 00:11:52.577 "data_size": 0 00:11:52.577 } 00:11:52.577 ] 00:11:52.577 }' 00:11:52.577 20:38:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:52.577 20:38:36 -- 
common/autotest_common.sh@10 -- # set +x 00:11:53.146 20:38:36 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:53.405 [2024-04-15 20:38:36.796203] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:53.405 [2024-04-15 20:38:36.796245] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:11:53.405 20:38:36 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:53.665 [2024-04-15 20:38:36.971929] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:53.665 [2024-04-15 20:38:36.972006] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:53.665 [2024-04-15 20:38:36.972016] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:53.665 [2024-04-15 20:38:36.972042] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:53.665 20:38:36 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:53.924 [2024-04-15 20:38:37.181981] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:53.924 BaseBdev1 00:11:53.924 20:38:37 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:11:53.924 20:38:37 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:11:53.924 20:38:37 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:53.924 20:38:37 -- common/autotest_common.sh@889 -- # local i 00:11:53.924 20:38:37 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:53.924 20:38:37 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:53.924 20:38:37 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:53.924 20:38:37 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:54.183 [ 00:11:54.183 { 00:11:54.183 "name": "BaseBdev1", 00:11:54.183 "aliases": [ 00:11:54.183 "4efcf2ec-27ee-4b85-80d5-c0bc3e277d27" 00:11:54.183 ], 00:11:54.183 "product_name": "Malloc disk", 00:11:54.183 "block_size": 512, 00:11:54.183 "num_blocks": 65536, 00:11:54.183 "uuid": "4efcf2ec-27ee-4b85-80d5-c0bc3e277d27", 00:11:54.183 "assigned_rate_limits": { 00:11:54.183 "rw_ios_per_sec": 0, 00:11:54.183 "rw_mbytes_per_sec": 0, 00:11:54.183 "r_mbytes_per_sec": 0, 00:11:54.183 "w_mbytes_per_sec": 0 00:11:54.183 }, 00:11:54.183 "claimed": true, 00:11:54.183 "claim_type": "exclusive_write", 00:11:54.183 "zoned": false, 00:11:54.183 "supported_io_types": { 00:11:54.183 "read": true, 00:11:54.183 "write": true, 00:11:54.183 "unmap": true, 00:11:54.183 "write_zeroes": true, 00:11:54.183 "flush": true, 00:11:54.183 "reset": true, 00:11:54.183 "compare": false, 00:11:54.183 "compare_and_write": false, 00:11:54.183 "abort": true, 00:11:54.183 "nvme_admin": false, 00:11:54.183 "nvme_io": false 00:11:54.183 }, 00:11:54.183 "memory_domains": [ 00:11:54.183 { 00:11:54.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.183 "dma_device_type": 2 00:11:54.183 } 00:11:54.183 ], 00:11:54.183 "driver_specific": {} 00:11:54.183 } 00:11:54.183 ] 00:11:54.183 20:38:37 
-- common/autotest_common.sh@895 -- # return 0 00:11:54.183 20:38:37 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:54.183 20:38:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:54.183 20:38:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:54.183 20:38:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:54.183 20:38:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:54.183 20:38:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:54.183 20:38:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:54.183 20:38:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:54.183 20:38:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:54.183 20:38:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:54.183 20:38:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.183 20:38:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:54.443 20:38:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:54.443 "name": "Existed_Raid", 00:11:54.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.443 "strip_size_kb": 64, 00:11:54.443 "state": "configuring", 00:11:54.443 "raid_level": "concat", 00:11:54.443 "superblock": false, 00:11:54.443 "num_base_bdevs": 2, 00:11:54.443 "num_base_bdevs_discovered": 1, 00:11:54.443 "num_base_bdevs_operational": 2, 00:11:54.443 "base_bdevs_list": [ 00:11:54.443 { 00:11:54.443 "name": "BaseBdev1", 00:11:54.443 "uuid": "4efcf2ec-27ee-4b85-80d5-c0bc3e277d27", 00:11:54.443 "is_configured": true, 00:11:54.443 "data_offset": 0, 00:11:54.443 "data_size": 65536 00:11:54.443 }, 00:11:54.443 { 00:11:54.443 "name": "BaseBdev2", 00:11:54.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.443 "is_configured": false, 00:11:54.443 "data_offset": 0, 00:11:54.443 "data_size": 0 00:11:54.443 } 00:11:54.443 ] 00:11:54.443 }' 00:11:54.443 20:38:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:54.443 20:38:37 -- common/autotest_common.sh@10 -- # set +x 00:11:55.011 20:38:38 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:55.011 [2024-04-15 20:38:38.364200] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:55.011 [2024-04-15 20:38:38.364236] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027080 name Existed_Raid, state configuring 00:11:55.011 20:38:38 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:11:55.011 20:38:38 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:55.270 [2024-04-15 20:38:38.524028] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:55.270 [2024-04-15 20:38:38.525400] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:55.270 [2024-04-15 20:38:38.525453] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:55.270 20:38:38 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:11:55.270 20:38:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:55.270 20:38:38 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:55.270 20:38:38 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:55.270 20:38:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:55.270 20:38:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:55.270 20:38:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:55.270 20:38:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:55.270 20:38:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:55.270 20:38:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:55.270 20:38:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:55.270 20:38:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:55.270 20:38:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.270 20:38:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:55.270 20:38:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:55.270 "name": "Existed_Raid", 00:11:55.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.270 "strip_size_kb": 64, 00:11:55.270 "state": "configuring", 00:11:55.270 "raid_level": "concat", 00:11:55.270 "superblock": false, 00:11:55.270 "num_base_bdevs": 2, 00:11:55.270 "num_base_bdevs_discovered": 1, 00:11:55.270 "num_base_bdevs_operational": 2, 00:11:55.270 "base_bdevs_list": [ 00:11:55.270 { 00:11:55.270 "name": "BaseBdev1", 00:11:55.270 "uuid": "4efcf2ec-27ee-4b85-80d5-c0bc3e277d27", 00:11:55.270 "is_configured": true, 00:11:55.270 "data_offset": 0, 00:11:55.270 "data_size": 65536 00:11:55.270 }, 00:11:55.270 { 00:11:55.270 "name": "BaseBdev2", 00:11:55.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.270 "is_configured": false, 00:11:55.270 "data_offset": 0, 00:11:55.270 "data_size": 0 00:11:55.270 } 00:11:55.270 ] 00:11:55.270 }' 00:11:55.270 20:38:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:55.270 20:38:38 -- common/autotest_common.sh@10 -- # set +x 00:11:55.838 20:38:39 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:56.097 [2024-04-15 20:38:39.424061] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:56.097 [2024-04-15 20:38:39.424094] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000027f80 00:11:56.097 [2024-04-15 20:38:39.424102] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:56.097 [2024-04-15 20:38:39.424189] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:11:56.097 [2024-04-15 20:38:39.424361] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000027f80 00:11:56.097 [2024-04-15 20:38:39.424370] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000027f80 00:11:56.097 [2024-04-15 20:38:39.424525] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:56.097 BaseBdev2 00:11:56.097 20:38:39 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:11:56.097 20:38:39 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:11:56.097 20:38:39 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:56.097 20:38:39 -- common/autotest_common.sh@889 -- # local i 00:11:56.097 20:38:39 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:56.097 20:38:39 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 
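The verify_raid_bdev_state helper whose trace appears above boils down to fetching the raid bdev's JSON and asserting on a handful of fields. A minimal sketch, assuming the field names shown in the dump above (the comparison code itself is an illustrative reconstruction of what the helper checks):

    # hypothetical condensed form of verify_raid_bdev_state
    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    # assert the expected state, level, strip size and operational bdev count
    [ "$(jq -r '.state' <<< "$info")" = configuring ]
    [ "$(jq -r '.raid_level' <<< "$info")" = concat ]
    [ "$(jq -r '.strip_size_kb' <<< "$info")" -eq 64 ]
    [ "$(jq -r '.num_base_bdevs_operational' <<< "$info")" -eq 2 ]

With only BaseBdev1 claimed at this point, num_base_bdevs_discovered reads 1 against 2 operational, which is why the state above is still "configuring" rather than "online".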
00:11:56.097 20:38:39 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:56.097 20:38:39 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:56.357 [ 00:11:56.357 { 00:11:56.357 "name": "BaseBdev2", 00:11:56.357 "aliases": [ 00:11:56.357 "6e325966-3985-4ed4-893f-98c20d98045d" 00:11:56.357 ], 00:11:56.357 "product_name": "Malloc disk", 00:11:56.357 "block_size": 512, 00:11:56.357 "num_blocks": 65536, 00:11:56.357 "uuid": "6e325966-3985-4ed4-893f-98c20d98045d", 00:11:56.357 "assigned_rate_limits": { 00:11:56.357 "rw_ios_per_sec": 0, 00:11:56.357 "rw_mbytes_per_sec": 0, 00:11:56.357 "r_mbytes_per_sec": 0, 00:11:56.357 "w_mbytes_per_sec": 0 00:11:56.357 }, 00:11:56.357 "claimed": true, 00:11:56.357 "claim_type": "exclusive_write", 00:11:56.357 "zoned": false, 00:11:56.357 "supported_io_types": { 00:11:56.357 "read": true, 00:11:56.357 "write": true, 00:11:56.357 "unmap": true, 00:11:56.357 "write_zeroes": true, 00:11:56.357 "flush": true, 00:11:56.357 "reset": true, 00:11:56.357 "compare": false, 00:11:56.357 "compare_and_write": false, 00:11:56.357 "abort": true, 00:11:56.357 "nvme_admin": false, 00:11:56.357 "nvme_io": false 00:11:56.357 }, 00:11:56.357 "memory_domains": [ 00:11:56.357 { 00:11:56.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.357 "dma_device_type": 2 00:11:56.357 } 00:11:56.357 ], 00:11:56.357 "driver_specific": {} 00:11:56.357 } 00:11:56.357 ] 00:11:56.357 20:38:39 -- common/autotest_common.sh@895 -- # return 0 00:11:56.357 20:38:39 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:11:56.357 20:38:39 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:56.357 20:38:39 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:11:56.357 20:38:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:56.357 20:38:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:56.357 20:38:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:56.357 20:38:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:56.357 20:38:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:56.357 20:38:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:56.357 20:38:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:56.357 20:38:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:56.357 20:38:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:56.357 20:38:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:56.357 20:38:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.616 20:38:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:56.616 "name": "Existed_Raid", 00:11:56.616 "uuid": "b3557fad-f0f7-4e50-8aa6-62df6553bcf2", 00:11:56.616 "strip_size_kb": 64, 00:11:56.616 "state": "online", 00:11:56.616 "raid_level": "concat", 00:11:56.616 "superblock": false, 00:11:56.616 "num_base_bdevs": 2, 00:11:56.616 "num_base_bdevs_discovered": 2, 00:11:56.616 "num_base_bdevs_operational": 2, 00:11:56.616 "base_bdevs_list": [ 00:11:56.616 { 00:11:56.616 "name": "BaseBdev1", 00:11:56.616 "uuid": "4efcf2ec-27ee-4b85-80d5-c0bc3e277d27", 00:11:56.616 "is_configured": true, 00:11:56.616 "data_offset": 0, 00:11:56.616 "data_size": 65536 00:11:56.616 }, 00:11:56.616 { 00:11:56.616 "name": 
"BaseBdev2", 00:11:56.616 "uuid": "6e325966-3985-4ed4-893f-98c20d98045d", 00:11:56.616 "is_configured": true, 00:11:56.616 "data_offset": 0, 00:11:56.616 "data_size": 65536 00:11:56.616 } 00:11:56.616 ] 00:11:56.616 }' 00:11:56.616 20:38:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:56.616 20:38:39 -- common/autotest_common.sh@10 -- # set +x 00:11:57.182 20:38:40 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:57.182 [2024-04-15 20:38:40.586339] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:57.182 [2024-04-15 20:38:40.586371] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:57.182 [2024-04-15 20:38:40.586411] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:57.441 20:38:40 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:11:57.441 20:38:40 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:11:57.441 20:38:40 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:11:57.441 20:38:40 -- bdev/bdev_raid.sh@197 -- # return 1 00:11:57.441 20:38:40 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:11:57.441 20:38:40 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:11:57.441 20:38:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:57.441 20:38:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:11:57.441 20:38:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:57.441 20:38:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:57.441 20:38:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:11:57.441 20:38:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:57.441 20:38:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:57.441 20:38:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:57.441 20:38:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:57.441 20:38:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.441 20:38:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:57.441 20:38:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:57.441 "name": "Existed_Raid", 00:11:57.441 "uuid": "b3557fad-f0f7-4e50-8aa6-62df6553bcf2", 00:11:57.441 "strip_size_kb": 64, 00:11:57.441 "state": "offline", 00:11:57.441 "raid_level": "concat", 00:11:57.441 "superblock": false, 00:11:57.441 "num_base_bdevs": 2, 00:11:57.441 "num_base_bdevs_discovered": 1, 00:11:57.442 "num_base_bdevs_operational": 1, 00:11:57.442 "base_bdevs_list": [ 00:11:57.442 { 00:11:57.442 "name": null, 00:11:57.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.442 "is_configured": false, 00:11:57.442 "data_offset": 0, 00:11:57.442 "data_size": 65536 00:11:57.442 }, 00:11:57.442 { 00:11:57.442 "name": "BaseBdev2", 00:11:57.442 "uuid": "6e325966-3985-4ed4-893f-98c20d98045d", 00:11:57.442 "is_configured": true, 00:11:57.442 "data_offset": 0, 00:11:57.442 "data_size": 65536 00:11:57.442 } 00:11:57.442 ] 00:11:57.442 }' 00:11:57.442 20:38:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:57.442 20:38:40 -- common/autotest_common.sh@10 -- # set +x 00:11:58.009 20:38:41 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:11:58.009 20:38:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:58.009 20:38:41 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:11:58.009 
20:38:41 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:58.268 20:38:41 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:11:58.268 20:38:41 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:58.268 20:38:41 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:58.268 [2024-04-15 20:38:41.685898] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:58.268 [2024-04-15 20:38:41.685942] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027f80 name Existed_Raid, state offline 00:11:58.527 20:38:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:11:58.527 20:38:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:58.527 20:38:41 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:11:58.527 20:38:41 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:58.527 20:38:41 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:11:58.527 20:38:41 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:11:58.527 20:38:41 -- bdev/bdev_raid.sh@287 -- # killprocess 47144 00:11:58.527 20:38:41 -- common/autotest_common.sh@926 -- # '[' -z 47144 ']' 00:11:58.527 20:38:41 -- common/autotest_common.sh@930 -- # kill -0 47144 00:11:58.527 20:38:41 -- common/autotest_common.sh@931 -- # uname 00:11:58.527 20:38:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:58.527 20:38:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 47144 00:11:58.527 killing process with pid 47144 00:11:58.527 20:38:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:58.527 20:38:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:58.527 20:38:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 47144' 00:11:58.527 20:38:42 -- common/autotest_common.sh@945 -- # kill 47144 00:11:58.527 20:38:42 -- common/autotest_common.sh@950 -- # wait 47144 00:11:58.527 [2024-04-15 20:38:42.011313] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:58.527 [2024-04-15 20:38:42.011407] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:59.906 ************************************ 00:11:59.906 END TEST raid_state_function_test 00:11:59.906 ************************************ 00:11:59.906 20:38:43 -- bdev/bdev_raid.sh@289 -- # return 0 00:11:59.906 00:11:59.906 real 0m9.154s 00:11:59.906 user 0m14.982s 00:11:59.906 sys 0m1.132s 00:11:59.906 20:38:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:59.906 20:38:43 -- common/autotest_common.sh@10 -- # set +x 00:11:59.906 20:38:43 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:11:59.906 20:38:43 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:11:59.906 20:38:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:59.906 20:38:43 -- common/autotest_common.sh@10 -- # set +x 00:11:59.906 ************************************ 00:11:59.906 START TEST raid_state_function_test_sb 00:11:59.906 ************************************ 00:11:59.906 20:38:43 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 true 00:11:59.906 20:38:43 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:11:59.906 20:38:43 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:11:59.906 20:38:43 -- 
bdev/bdev_raid.sh@204 -- # local superblock=true 00:11:59.906 20:38:43 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:11:59.906 20:38:43 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:11:59.906 20:38:43 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:11:59.906 20:38:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:59.906 20:38:43 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:11:59.906 20:38:43 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:59.906 20:38:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:59.906 20:38:43 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:11:59.906 20:38:43 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:59.906 20:38:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:59.906 20:38:43 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:11:59.906 20:38:43 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:11:59.906 20:38:43 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:11:59.906 20:38:43 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:11:59.906 20:38:43 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:11:59.906 20:38:43 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:11:59.906 20:38:43 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:11:59.906 20:38:43 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:11:59.906 20:38:43 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:11:59.906 20:38:43 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:11:59.906 20:38:43 -- bdev/bdev_raid.sh@226 -- # raid_pid=47463 00:11:59.906 Process raid pid: 47463 00:11:59.906 20:38:43 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 47463' 00:11:59.906 20:38:43 -- bdev/bdev_raid.sh@228 -- # waitforlisten 47463 /var/tmp/spdk-raid.sock 00:11:59.906 20:38:43 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:59.906 20:38:43 -- common/autotest_common.sh@819 -- # '[' -z 47463 ']' 00:11:59.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:59.906 20:38:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:59.906 20:38:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:59.906 20:38:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:59.906 20:38:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:59.906 20:38:43 -- common/autotest_common.sh@10 -- # set +x 00:12:00.165 [2024-04-15 20:38:43.482693] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:12:00.165 [2024-04-15 20:38:43.482828] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.165 [2024-04-15 20:38:43.635801] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.488 [2024-04-15 20:38:43.822044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.747 [2024-04-15 20:38:44.015700] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:00.747 20:38:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:00.747 20:38:44 -- common/autotest_common.sh@852 -- # return 0 00:12:00.747 20:38:44 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:01.006 [2024-04-15 20:38:44.325881] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:01.006 [2024-04-15 20:38:44.325945] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:01.006 [2024-04-15 20:38:44.325956] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:01.006 [2024-04-15 20:38:44.325972] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:01.006 20:38:44 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:01.006 20:38:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:01.006 20:38:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:01.006 20:38:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:01.006 20:38:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:01.006 20:38:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:01.006 20:38:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:01.006 20:38:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:01.006 20:38:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:01.006 20:38:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:01.006 20:38:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:01.006 20:38:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.265 20:38:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:01.265 "name": "Existed_Raid", 00:12:01.265 "uuid": "5d33faf7-b68b-4885-8ad9-f5abd10896e6", 00:12:01.265 "strip_size_kb": 64, 00:12:01.265 "state": "configuring", 00:12:01.265 "raid_level": "concat", 00:12:01.265 "superblock": true, 00:12:01.265 "num_base_bdevs": 2, 00:12:01.265 "num_base_bdevs_discovered": 0, 00:12:01.265 "num_base_bdevs_operational": 2, 00:12:01.265 "base_bdevs_list": [ 00:12:01.265 { 00:12:01.265 "name": "BaseBdev1", 00:12:01.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.265 "is_configured": false, 00:12:01.265 "data_offset": 0, 00:12:01.265 "data_size": 0 00:12:01.265 }, 00:12:01.265 { 00:12:01.265 "name": "BaseBdev2", 00:12:01.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.265 "is_configured": false, 00:12:01.265 "data_offset": 0, 00:12:01.265 "data_size": 0 00:12:01.265 } 00:12:01.265 ] 00:12:01.265 }' 00:12:01.265 20:38:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:01.265 20:38:44 -- 
common/autotest_common.sh@10 -- # set +x 00:12:01.833 20:38:45 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:01.833 [2024-04-15 20:38:45.164762] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:01.833 [2024-04-15 20:38:45.164816] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:12:01.833 20:38:45 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:01.833 [2024-04-15 20:38:45.308631] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:01.833 [2024-04-15 20:38:45.308712] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:01.833 [2024-04-15 20:38:45.308722] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:01.833 [2024-04-15 20:38:45.308742] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:01.833 20:38:45 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:02.092 [2024-04-15 20:38:45.494284] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:02.092 BaseBdev1 00:12:02.092 20:38:45 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:12:02.092 20:38:45 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:12:02.092 20:38:45 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:02.092 20:38:45 -- common/autotest_common.sh@889 -- # local i 00:12:02.092 20:38:45 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:02.092 20:38:45 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:02.092 20:38:45 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:02.350 20:38:45 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:02.350 [ 00:12:02.350 { 00:12:02.350 "name": "BaseBdev1", 00:12:02.350 "aliases": [ 00:12:02.350 "55a14546-fba2-4796-9b66-07131ac78c14" 00:12:02.350 ], 00:12:02.350 "product_name": "Malloc disk", 00:12:02.350 "block_size": 512, 00:12:02.350 "num_blocks": 65536, 00:12:02.350 "uuid": "55a14546-fba2-4796-9b66-07131ac78c14", 00:12:02.350 "assigned_rate_limits": { 00:12:02.350 "rw_ios_per_sec": 0, 00:12:02.350 "rw_mbytes_per_sec": 0, 00:12:02.350 "r_mbytes_per_sec": 0, 00:12:02.350 "w_mbytes_per_sec": 0 00:12:02.350 }, 00:12:02.350 "claimed": true, 00:12:02.351 "claim_type": "exclusive_write", 00:12:02.351 "zoned": false, 00:12:02.351 "supported_io_types": { 00:12:02.351 "read": true, 00:12:02.351 "write": true, 00:12:02.351 "unmap": true, 00:12:02.351 "write_zeroes": true, 00:12:02.351 "flush": true, 00:12:02.351 "reset": true, 00:12:02.351 "compare": false, 00:12:02.351 "compare_and_write": false, 00:12:02.351 "abort": true, 00:12:02.351 "nvme_admin": false, 00:12:02.351 "nvme_io": false 00:12:02.351 }, 00:12:02.351 "memory_domains": [ 00:12:02.351 { 00:12:02.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.351 "dma_device_type": 2 00:12:02.351 } 00:12:02.351 ], 00:12:02.351 "driver_specific": {} 00:12:02.351 } 00:12:02.351 ] 00:12:02.351 
20:38:45 -- common/autotest_common.sh@895 -- # return 0 00:12:02.351 20:38:45 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:02.351 20:38:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:02.351 20:38:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:02.351 20:38:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:02.351 20:38:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:02.351 20:38:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:02.351 20:38:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:02.351 20:38:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:02.351 20:38:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:02.351 20:38:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:02.351 20:38:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.351 20:38:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:02.611 20:38:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:02.611 "name": "Existed_Raid", 00:12:02.611 "uuid": "0a7d151d-f9c0-40ab-8503-cebc9876128e", 00:12:02.611 "strip_size_kb": 64, 00:12:02.611 "state": "configuring", 00:12:02.611 "raid_level": "concat", 00:12:02.611 "superblock": true, 00:12:02.611 "num_base_bdevs": 2, 00:12:02.611 "num_base_bdevs_discovered": 1, 00:12:02.611 "num_base_bdevs_operational": 2, 00:12:02.611 "base_bdevs_list": [ 00:12:02.611 { 00:12:02.611 "name": "BaseBdev1", 00:12:02.611 "uuid": "55a14546-fba2-4796-9b66-07131ac78c14", 00:12:02.611 "is_configured": true, 00:12:02.611 "data_offset": 2048, 00:12:02.611 "data_size": 63488 00:12:02.611 }, 00:12:02.611 { 00:12:02.611 "name": "BaseBdev2", 00:12:02.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.611 "is_configured": false, 00:12:02.611 "data_offset": 0, 00:12:02.611 "data_size": 0 00:12:02.612 } 00:12:02.612 ] 00:12:02.612 }' 00:12:02.612 20:38:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:02.612 20:38:45 -- common/autotest_common.sh@10 -- # set +x 00:12:03.182 20:38:46 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:03.182 [2024-04-15 20:38:46.588606] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:03.182 [2024-04-15 20:38:46.588821] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027080 name Existed_Raid, state configuring 00:12:03.182 20:38:46 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:12:03.182 20:38:46 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:03.441 20:38:46 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:03.699 BaseBdev1 00:12:03.699 20:38:47 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:12:03.699 20:38:47 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:12:03.699 20:38:47 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:03.699 20:38:47 -- common/autotest_common.sh@889 -- # local i 00:12:03.699 20:38:47 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:03.699 20:38:47 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:03.699 20:38:47 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:03.958 20:38:47 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:03.958 [ 00:12:03.958 { 00:12:03.958 "name": "BaseBdev1", 00:12:03.958 "aliases": [ 00:12:03.958 "52c5bc1f-1361-479d-8fd6-6e9c2d57118f" 00:12:03.958 ], 00:12:03.958 "product_name": "Malloc disk", 00:12:03.958 "block_size": 512, 00:12:03.958 "num_blocks": 65536, 00:12:03.958 "uuid": "52c5bc1f-1361-479d-8fd6-6e9c2d57118f", 00:12:03.958 "assigned_rate_limits": { 00:12:03.958 "rw_ios_per_sec": 0, 00:12:03.958 "rw_mbytes_per_sec": 0, 00:12:03.958 "r_mbytes_per_sec": 0, 00:12:03.958 "w_mbytes_per_sec": 0 00:12:03.958 }, 00:12:03.958 "claimed": false, 00:12:03.958 "zoned": false, 00:12:03.958 "supported_io_types": { 00:12:03.958 "read": true, 00:12:03.958 "write": true, 00:12:03.958 "unmap": true, 00:12:03.958 "write_zeroes": true, 00:12:03.958 "flush": true, 00:12:03.958 "reset": true, 00:12:03.958 "compare": false, 00:12:03.958 "compare_and_write": false, 00:12:03.958 "abort": true, 00:12:03.958 "nvme_admin": false, 00:12:03.958 "nvme_io": false 00:12:03.958 }, 00:12:03.958 "memory_domains": [ 00:12:03.958 { 00:12:03.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.958 "dma_device_type": 2 00:12:03.958 } 00:12:03.958 ], 00:12:03.958 "driver_specific": {} 00:12:03.958 } 00:12:03.958 ] 00:12:03.958 20:38:47 -- common/autotest_common.sh@895 -- # return 0 00:12:03.958 20:38:47 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:04.217 [2024-04-15 20:38:47.502064] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:04.217 [2024-04-15 20:38:47.503433] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:04.217 [2024-04-15 20:38:47.503491] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:04.217 20:38:47 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:12:04.217 20:38:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:04.217 20:38:47 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:04.217 20:38:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:04.217 20:38:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:04.217 20:38:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:04.217 20:38:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:04.217 20:38:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:04.217 20:38:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:04.217 20:38:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:04.217 20:38:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:04.217 20:38:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:04.217 20:38:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.217 20:38:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:04.217 20:38:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:04.217 "name": "Existed_Raid", 00:12:04.217 "uuid": "ae4a8f9c-d225-431e-90f7-0f98885b3a9e", 00:12:04.217 "strip_size_kb": 64, 00:12:04.217 "state": 
"configuring", 00:12:04.217 "raid_level": "concat", 00:12:04.217 "superblock": true, 00:12:04.217 "num_base_bdevs": 2, 00:12:04.217 "num_base_bdevs_discovered": 1, 00:12:04.217 "num_base_bdevs_operational": 2, 00:12:04.217 "base_bdevs_list": [ 00:12:04.217 { 00:12:04.217 "name": "BaseBdev1", 00:12:04.217 "uuid": "52c5bc1f-1361-479d-8fd6-6e9c2d57118f", 00:12:04.217 "is_configured": true, 00:12:04.217 "data_offset": 2048, 00:12:04.217 "data_size": 63488 00:12:04.217 }, 00:12:04.217 { 00:12:04.217 "name": "BaseBdev2", 00:12:04.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.217 "is_configured": false, 00:12:04.217 "data_offset": 0, 00:12:04.217 "data_size": 0 00:12:04.217 } 00:12:04.217 ] 00:12:04.217 }' 00:12:04.217 20:38:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:04.217 20:38:47 -- common/autotest_common.sh@10 -- # set +x 00:12:04.783 20:38:48 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:05.042 [2024-04-15 20:38:48.406169] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:05.042 [2024-04-15 20:38:48.406294] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000028580 00:12:05.042 [2024-04-15 20:38:48.406305] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:05.042 [2024-04-15 20:38:48.406385] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:12:05.042 [2024-04-15 20:38:48.406552] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000028580 00:12:05.042 [2024-04-15 20:38:48.406562] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000028580 00:12:05.042 BaseBdev2 00:12:05.042 [2024-04-15 20:38:48.406863] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.042 20:38:48 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:12:05.042 20:38:48 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:12:05.042 20:38:48 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:05.042 20:38:48 -- common/autotest_common.sh@889 -- # local i 00:12:05.042 20:38:48 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:05.042 20:38:48 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:05.042 20:38:48 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:05.301 20:38:48 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:05.301 [ 00:12:05.301 { 00:12:05.301 "name": "BaseBdev2", 00:12:05.301 "aliases": [ 00:12:05.301 "f6aadb95-c104-4d6f-a440-9071b7552822" 00:12:05.301 ], 00:12:05.301 "product_name": "Malloc disk", 00:12:05.301 "block_size": 512, 00:12:05.301 "num_blocks": 65536, 00:12:05.301 "uuid": "f6aadb95-c104-4d6f-a440-9071b7552822", 00:12:05.301 "assigned_rate_limits": { 00:12:05.301 "rw_ios_per_sec": 0, 00:12:05.301 "rw_mbytes_per_sec": 0, 00:12:05.301 "r_mbytes_per_sec": 0, 00:12:05.301 "w_mbytes_per_sec": 0 00:12:05.301 }, 00:12:05.301 "claimed": true, 00:12:05.301 "claim_type": "exclusive_write", 00:12:05.301 "zoned": false, 00:12:05.301 "supported_io_types": { 00:12:05.301 "read": true, 00:12:05.301 "write": true, 00:12:05.301 "unmap": true, 00:12:05.301 "write_zeroes": true, 00:12:05.301 "flush": true, 00:12:05.301 
"reset": true, 00:12:05.301 "compare": false, 00:12:05.301 "compare_and_write": false, 00:12:05.301 "abort": true, 00:12:05.301 "nvme_admin": false, 00:12:05.301 "nvme_io": false 00:12:05.301 }, 00:12:05.301 "memory_domains": [ 00:12:05.301 { 00:12:05.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.301 "dma_device_type": 2 00:12:05.301 } 00:12:05.301 ], 00:12:05.301 "driver_specific": {} 00:12:05.301 } 00:12:05.301 ] 00:12:05.301 20:38:48 -- common/autotest_common.sh@895 -- # return 0 00:12:05.301 20:38:48 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:12:05.301 20:38:48 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:05.301 20:38:48 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:12:05.301 20:38:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:05.301 20:38:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:05.301 20:38:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:05.301 20:38:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:05.301 20:38:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:05.301 20:38:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:05.301 20:38:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:05.301 20:38:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:05.301 20:38:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:05.301 20:38:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.301 20:38:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:05.561 20:38:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:05.561 "name": "Existed_Raid", 00:12:05.561 "uuid": "ae4a8f9c-d225-431e-90f7-0f98885b3a9e", 00:12:05.561 "strip_size_kb": 64, 00:12:05.561 "state": "online", 00:12:05.561 "raid_level": "concat", 00:12:05.561 "superblock": true, 00:12:05.561 "num_base_bdevs": 2, 00:12:05.561 "num_base_bdevs_discovered": 2, 00:12:05.561 "num_base_bdevs_operational": 2, 00:12:05.561 "base_bdevs_list": [ 00:12:05.561 { 00:12:05.561 "name": "BaseBdev1", 00:12:05.561 "uuid": "52c5bc1f-1361-479d-8fd6-6e9c2d57118f", 00:12:05.561 "is_configured": true, 00:12:05.561 "data_offset": 2048, 00:12:05.561 "data_size": 63488 00:12:05.561 }, 00:12:05.561 { 00:12:05.561 "name": "BaseBdev2", 00:12:05.561 "uuid": "f6aadb95-c104-4d6f-a440-9071b7552822", 00:12:05.561 "is_configured": true, 00:12:05.561 "data_offset": 2048, 00:12:05.561 "data_size": 63488 00:12:05.561 } 00:12:05.561 ] 00:12:05.561 }' 00:12:05.561 20:38:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:05.561 20:38:48 -- common/autotest_common.sh@10 -- # set +x 00:12:06.147 20:38:49 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:06.147 [2024-04-15 20:38:49.537008] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:06.147 [2024-04-15 20:38:49.537036] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:06.147 [2024-04-15 20:38:49.537071] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:06.147 20:38:49 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:12:06.147 20:38:49 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:12:06.406 20:38:49 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:12:06.406 20:38:49 -- bdev/bdev_raid.sh@197 -- # return 1 00:12:06.406 
20:38:49 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:12:06.406 20:38:49 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:12:06.406 20:38:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:06.406 20:38:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:12:06.406 20:38:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:06.406 20:38:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:06.406 20:38:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:12:06.406 20:38:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:06.406 20:38:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:06.406 20:38:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:06.406 20:38:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:06.406 20:38:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:06.406 20:38:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.406 20:38:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:06.406 "name": "Existed_Raid", 00:12:06.406 "uuid": "ae4a8f9c-d225-431e-90f7-0f98885b3a9e", 00:12:06.406 "strip_size_kb": 64, 00:12:06.406 "state": "offline", 00:12:06.406 "raid_level": "concat", 00:12:06.406 "superblock": true, 00:12:06.406 "num_base_bdevs": 2, 00:12:06.406 "num_base_bdevs_discovered": 1, 00:12:06.406 "num_base_bdevs_operational": 1, 00:12:06.406 "base_bdevs_list": [ 00:12:06.406 { 00:12:06.406 "name": null, 00:12:06.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.406 "is_configured": false, 00:12:06.406 "data_offset": 2048, 00:12:06.406 "data_size": 63488 00:12:06.406 }, 00:12:06.406 { 00:12:06.406 "name": "BaseBdev2", 00:12:06.406 "uuid": "f6aadb95-c104-4d6f-a440-9071b7552822", 00:12:06.406 "is_configured": true, 00:12:06.406 "data_offset": 2048, 00:12:06.406 "data_size": 63488 00:12:06.406 } 00:12:06.406 ] 00:12:06.406 }' 00:12:06.406 20:38:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:06.406 20:38:49 -- common/autotest_common.sh@10 -- # set +x 00:12:06.973 20:38:50 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:12:06.973 20:38:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:12:06.973 20:38:50 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:06.973 20:38:50 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:12:06.973 20:38:50 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:12:06.973 20:38:50 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:06.973 20:38:50 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:07.232 [2024-04-15 20:38:50.628991] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:07.232 [2024-04-15 20:38:50.629035] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000028580 name Existed_Raid, state offline 00:12:07.492 20:38:50 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:12:07.492 20:38:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:12:07.492 20:38:50 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:07.492 20:38:50 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:12:07.492 20:38:50 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:12:07.492 20:38:50 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:12:07.492 20:38:50 -- bdev/bdev_raid.sh@287 -- # killprocess 47463 00:12:07.492 20:38:50 -- common/autotest_common.sh@926 -- # '[' -z 47463 ']' 00:12:07.492 20:38:50 -- common/autotest_common.sh@930 -- # kill -0 47463 00:12:07.492 20:38:50 -- common/autotest_common.sh@931 -- # uname 00:12:07.492 20:38:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:07.492 20:38:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 47463 00:12:07.492 killing process with pid 47463 00:12:07.492 20:38:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:07.492 20:38:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:07.492 20:38:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 47463' 00:12:07.492 20:38:50 -- common/autotest_common.sh@945 -- # kill 47463 00:12:07.492 20:38:50 -- common/autotest_common.sh@950 -- # wait 47463 00:12:07.492 [2024-04-15 20:38:50.934729] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:07.492 [2024-04-15 20:38:50.934823] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:08.875 20:38:52 -- bdev/bdev_raid.sh@289 -- # return 0 00:12:08.875 00:12:08.875 real 0m8.835s 00:12:08.875 user 0m14.690s 00:12:08.875 sys 0m1.179s 00:12:08.875 20:38:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:08.875 20:38:52 -- common/autotest_common.sh@10 -- # set +x 00:12:08.875 ************************************ 00:12:08.875 END TEST raid_state_function_test_sb 00:12:08.875 ************************************ 00:12:08.875 20:38:52 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:12:08.875 20:38:52 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:12:08.875 20:38:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:08.875 20:38:52 -- common/autotest_common.sh@10 -- # set +x 00:12:08.875 ************************************ 00:12:08.875 START TEST raid_superblock_test 00:12:08.875 ************************************ 00:12:08.875 20:38:52 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 2 00:12:08.875 20:38:52 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:12:08.875 20:38:52 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:12:08.875 20:38:52 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:12:08.875 20:38:52 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:12:08.875 20:38:52 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:12:08.875 20:38:52 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:12:08.875 20:38:52 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:12:08.875 20:38:52 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:12:08.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
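Before the raid_superblock_test run that starts below, note the layering it sets up: each malloc bdev is wrapped in a passthru bdev with a fixed UUID, and the raid is then built on the passthru devices so a superblock can be written and re-examined across restarts. A rough sketch using the commands logged in this test (the loop form and the -s flag placement are assumed simplifications):

    # assumed condensation of the superblock-test setup steps seen in this log
    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    for i in 1 2; do
      $rpc bdev_malloc_create 32 512 -b malloc$i
      # give each base bdev a deterministic UUID via a passthru wrapper
      $rpc bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
    done
    # -s asks bdev_raid_create to persist a superblock on the base bdevs
    $rpc bdev_raid_create -z 64 -s -r concat -b 'pt1 pt2' -n raid_bdev1

The "superblock": true field in the raid_bdev1 JSON earlier in this log is the observable effect of creating the array this way.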
00:12:08.875 20:38:52 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:12:08.875 20:38:52 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:12:08.875 20:38:52 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:12:08.875 20:38:52 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:12:08.875 20:38:52 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:12:08.875 20:38:52 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:12:08.875 20:38:52 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:12:08.875 20:38:52 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:12:08.875 20:38:52 -- bdev/bdev_raid.sh@357 -- # raid_pid=47771 00:12:08.875 20:38:52 -- bdev/bdev_raid.sh@358 -- # waitforlisten 47771 /var/tmp/spdk-raid.sock 00:12:08.875 20:38:52 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:12:08.875 20:38:52 -- common/autotest_common.sh@819 -- # '[' -z 47771 ']' 00:12:08.875 20:38:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:08.875 20:38:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:08.875 20:38:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:08.876 20:38:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:08.876 20:38:52 -- common/autotest_common.sh@10 -- # set +x 00:12:08.876 [2024-04-15 20:38:52.367669] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:12:08.876 [2024-04-15 20:38:52.367813] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid47771 ] 00:12:09.134 [2024-04-15 20:38:52.520965] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.393 [2024-04-15 20:38:52.711927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.652 [2024-04-15 20:38:52.897333] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:09.652 20:38:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:09.652 20:38:53 -- common/autotest_common.sh@852 -- # return 0 00:12:09.652 20:38:53 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:12:09.652 20:38:53 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:12:09.652 20:38:53 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:12:09.652 20:38:53 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:12:09.652 20:38:53 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:09.652 20:38:53 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:09.652 20:38:53 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:12:09.652 20:38:53 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:09.652 20:38:53 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:12:09.911 malloc1 00:12:09.911 20:38:53 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:10.170 [2024-04-15 20:38:53.433407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:10.170 [2024-04-15 20:38:53.433490] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.170 [2024-04-15 20:38:53.433534] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027080 00:12:10.170 [2024-04-15 20:38:53.433572] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.170 [2024-04-15 20:38:53.435183] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.170 [2024-04-15 20:38:53.435222] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:10.170 pt1 00:12:10.170 20:38:53 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:12:10.170 20:38:53 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:12:10.170 20:38:53 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:12:10.170 20:38:53 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:12:10.170 20:38:53 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:10.170 20:38:53 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:10.170 20:38:53 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:12:10.170 20:38:53 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:10.170 20:38:53 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:12:10.170 malloc2 00:12:10.429 20:38:53 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:10.429 [2024-04-15 20:38:53.833673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:10.429 [2024-04-15 20:38:53.833757] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.429 [2024-04-15 20:38:53.833797] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000028e80 00:12:10.429 [2024-04-15 20:38:53.833835] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.429 [2024-04-15 20:38:53.835443] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.429 [2024-04-15 20:38:53.835489] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:10.429 pt2 00:12:10.429 20:38:53 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:12:10.429 20:38:53 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:12:10.429 20:38:53 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:12:10.689 [2024-04-15 20:38:54.021501] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:10.689 [2024-04-15 20:38:54.022900] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:10.689 [2024-04-15 20:38:54.023012] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002a380 00:12:10.689 [2024-04-15 20:38:54.023023] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:10.689 [2024-04-15 20:38:54.023127] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:12:10.689 [2024-04-15 20:38:54.023329] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002a380 00:12:10.689 [2024-04-15 20:38:54.023338] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x61600002a380 00:12:10.689 [2024-04-15 20:38:54.023424] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.689 20:38:54 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:10.689 20:38:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:12:10.689 20:38:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:10.689 20:38:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:10.689 20:38:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:10.689 20:38:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:10.689 20:38:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:10.689 20:38:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:10.689 20:38:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:10.689 20:38:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:10.689 20:38:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:10.689 20:38:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.948 20:38:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:10.948 "name": "raid_bdev1", 00:12:10.948 "uuid": "42ce6371-0906-44e7-8b74-ad00daabae5d", 00:12:10.948 "strip_size_kb": 64, 00:12:10.948 "state": "online", 00:12:10.948 "raid_level": "concat", 00:12:10.948 "superblock": true, 00:12:10.948 "num_base_bdevs": 2, 00:12:10.948 "num_base_bdevs_discovered": 2, 00:12:10.948 "num_base_bdevs_operational": 2, 00:12:10.948 "base_bdevs_list": [ 00:12:10.948 { 00:12:10.948 "name": "pt1", 00:12:10.948 "uuid": "bf80bd84-070b-5449-a51e-3caec75f8751", 00:12:10.948 "is_configured": true, 00:12:10.948 "data_offset": 2048, 00:12:10.948 "data_size": 63488 00:12:10.948 }, 00:12:10.948 { 00:12:10.948 "name": "pt2", 00:12:10.948 "uuid": "beddb3b0-7a29-54c6-b9a7-2d396d6135fa", 00:12:10.948 "is_configured": true, 00:12:10.948 "data_offset": 2048, 00:12:10.948 "data_size": 63488 00:12:10.948 } 00:12:10.948 ] 00:12:10.948 }' 00:12:10.948 20:38:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:10.948 20:38:54 -- common/autotest_common.sh@10 -- # set +x 00:12:11.516 20:38:54 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:12:11.516 20:38:54 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:11.516 [2024-04-15 20:38:54.932167] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:11.516 20:38:54 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=42ce6371-0906-44e7-8b74-ad00daabae5d 00:12:11.516 20:38:54 -- bdev/bdev_raid.sh@380 -- # '[' -z 42ce6371-0906-44e7-8b74-ad00daabae5d ']' 00:12:11.516 20:38:54 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:11.774 [2024-04-15 20:38:55.099766] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:11.774 [2024-04-15 20:38:55.099802] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:11.774 [2024-04-15 20:38:55.099868] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:11.774 [2024-04-15 20:38:55.099896] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:11.774 [2024-04-15 20:38:55.099904] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x61600002a380 name raid_bdev1, state offline 00:12:11.774 20:38:55 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:12:11.774 20:38:55 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:12.032 20:38:55 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:12:12.032 20:38:55 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:12:12.032 20:38:55 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:12:12.032 20:38:55 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:12.032 20:38:55 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:12:12.032 20:38:55 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:12.291 20:38:55 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:12.291 20:38:55 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:12:12.291 20:38:55 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:12:12.291 20:38:55 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:12:12.291 20:38:55 -- common/autotest_common.sh@640 -- # local es=0 00:12:12.291 20:38:55 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:12:12.291 20:38:55 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:12.291 20:38:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:12.291 20:38:55 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:12.291 20:38:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:12.291 20:38:55 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:12.291 20:38:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:12.291 20:38:55 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:12.291 20:38:55 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:12.291 20:38:55 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:12:12.620 [2024-04-15 20:38:55.958481] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:12.620 [2024-04-15 20:38:55.959848] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:12.620 [2024-04-15 20:38:55.959892] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:12:12.620 [2024-04-15 20:38:55.959946] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:12:12.620 [2024-04-15 20:38:55.959973] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:12.620 [2024-04-15 20:38:55.959983] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002a980 name raid_bdev1, state configuring 00:12:12.620 request: 
00:12:12.620 { 00:12:12.620 "name": "raid_bdev1", 00:12:12.620 "raid_level": "concat", 00:12:12.620 "base_bdevs": [ 00:12:12.620 "malloc1", 00:12:12.620 "malloc2" 00:12:12.620 ], 00:12:12.620 "superblock": false, 00:12:12.620 "strip_size_kb": 64, 00:12:12.620 "method": "bdev_raid_create", 00:12:12.620 "req_id": 1 00:12:12.620 } 00:12:12.620 Got JSON-RPC error response 00:12:12.620 response: 00:12:12.620 { 00:12:12.620 "code": -17, 00:12:12.620 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:12.620 } 00:12:12.620 20:38:55 -- common/autotest_common.sh@643 -- # es=1 00:12:12.620 20:38:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:12.620 20:38:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:12.620 20:38:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:12.620 20:38:55 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:12:12.620 20:38:55 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:12.896 20:38:56 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:12:12.896 20:38:56 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:12:12.896 20:38:56 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:12.896 [2024-04-15 20:38:56.305936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:12.896 [2024-04-15 20:38:56.306035] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.896 [2024-04-15 20:38:56.306071] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002b880 00:12:12.896 [2024-04-15 20:38:56.306099] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.896 [2024-04-15 20:38:56.307690] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.896 [2024-04-15 20:38:56.307739] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:12.896 [2024-04-15 20:38:56.307814] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:12:12.896 [2024-04-15 20:38:56.307869] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:12.896 pt1 00:12:12.896 20:38:56 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:12:12.896 20:38:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:12:12.896 20:38:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:12.896 20:38:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:12.896 20:38:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:12.896 20:38:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:12.896 20:38:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:12.896 20:38:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:12.896 20:38:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:12.896 20:38:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:12.896 20:38:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:12.896 20:38:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.155 20:38:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:13.155 "name": "raid_bdev1", 00:12:13.155 "uuid": 
"42ce6371-0906-44e7-8b74-ad00daabae5d", 00:12:13.155 "strip_size_kb": 64, 00:12:13.155 "state": "configuring", 00:12:13.155 "raid_level": "concat", 00:12:13.155 "superblock": true, 00:12:13.155 "num_base_bdevs": 2, 00:12:13.155 "num_base_bdevs_discovered": 1, 00:12:13.155 "num_base_bdevs_operational": 2, 00:12:13.155 "base_bdevs_list": [ 00:12:13.155 { 00:12:13.155 "name": "pt1", 00:12:13.155 "uuid": "bf80bd84-070b-5449-a51e-3caec75f8751", 00:12:13.155 "is_configured": true, 00:12:13.155 "data_offset": 2048, 00:12:13.155 "data_size": 63488 00:12:13.155 }, 00:12:13.155 { 00:12:13.155 "name": null, 00:12:13.155 "uuid": "beddb3b0-7a29-54c6-b9a7-2d396d6135fa", 00:12:13.155 "is_configured": false, 00:12:13.155 "data_offset": 2048, 00:12:13.155 "data_size": 63488 00:12:13.155 } 00:12:13.155 ] 00:12:13.155 }' 00:12:13.155 20:38:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:13.155 20:38:56 -- common/autotest_common.sh@10 -- # set +x 00:12:13.724 20:38:57 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:12:13.724 20:38:57 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:12:13.724 20:38:57 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:12:13.724 20:38:57 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:13.724 [2024-04-15 20:38:57.196599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:13.724 [2024-04-15 20:38:57.196886] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.724 [2024-04-15 20:38:57.196946] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002d380 00:12:13.724 [2024-04-15 20:38:57.196973] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.724 [2024-04-15 20:38:57.197277] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.724 [2024-04-15 20:38:57.197315] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:13.724 [2024-04-15 20:38:57.197397] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:12:13.724 [2024-04-15 20:38:57.197417] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:13.724 [2024-04-15 20:38:57.197485] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002cd80 00:12:13.724 [2024-04-15 20:38:57.197493] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:13.724 [2024-04-15 20:38:57.197576] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:12:13.724 [2024-04-15 20:38:57.197739] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002cd80 00:12:13.724 [2024-04-15 20:38:57.197749] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002cd80 00:12:13.724 [2024-04-15 20:38:57.197829] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.724 pt2 00:12:13.724 20:38:57 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:12:13.724 20:38:57 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:12:13.724 20:38:57 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:13.724 20:38:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:12:13.724 20:38:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:13.724 
20:38:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:13.724 20:38:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:13.724 20:38:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:13.724 20:38:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:13.724 20:38:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:13.724 20:38:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:13.724 20:38:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:13.724 20:38:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.724 20:38:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:13.984 20:38:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:13.984 "name": "raid_bdev1", 00:12:13.984 "uuid": "42ce6371-0906-44e7-8b74-ad00daabae5d", 00:12:13.984 "strip_size_kb": 64, 00:12:13.984 "state": "online", 00:12:13.984 "raid_level": "concat", 00:12:13.984 "superblock": true, 00:12:13.984 "num_base_bdevs": 2, 00:12:13.984 "num_base_bdevs_discovered": 2, 00:12:13.984 "num_base_bdevs_operational": 2, 00:12:13.984 "base_bdevs_list": [ 00:12:13.984 { 00:12:13.984 "name": "pt1", 00:12:13.984 "uuid": "bf80bd84-070b-5449-a51e-3caec75f8751", 00:12:13.984 "is_configured": true, 00:12:13.984 "data_offset": 2048, 00:12:13.984 "data_size": 63488 00:12:13.984 }, 00:12:13.984 { 00:12:13.984 "name": "pt2", 00:12:13.984 "uuid": "beddb3b0-7a29-54c6-b9a7-2d396d6135fa", 00:12:13.984 "is_configured": true, 00:12:13.984 "data_offset": 2048, 00:12:13.984 "data_size": 63488 00:12:13.984 } 00:12:13.984 ] 00:12:13.984 }' 00:12:13.984 20:38:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:13.984 20:38:57 -- common/autotest_common.sh@10 -- # set +x 00:12:14.553 20:38:57 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:14.553 20:38:57 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:12:14.553 [2024-04-15 20:38:58.043446] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:14.813 20:38:58 -- bdev/bdev_raid.sh@430 -- # '[' 42ce6371-0906-44e7-8b74-ad00daabae5d '!=' 42ce6371-0906-44e7-8b74-ad00daabae5d ']' 00:12:14.813 20:38:58 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:12:14.813 20:38:58 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:12:14.813 20:38:58 -- bdev/bdev_raid.sh@197 -- # return 1 00:12:14.813 20:38:58 -- bdev/bdev_raid.sh@511 -- # killprocess 47771 00:12:14.813 20:38:58 -- common/autotest_common.sh@926 -- # '[' -z 47771 ']' 00:12:14.813 20:38:58 -- common/autotest_common.sh@930 -- # kill -0 47771 00:12:14.813 20:38:58 -- common/autotest_common.sh@931 -- # uname 00:12:14.813 20:38:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:14.813 20:38:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 47771 00:12:14.813 killing process with pid 47771 00:12:14.813 20:38:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:14.813 20:38:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:14.813 20:38:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 47771' 00:12:14.813 20:38:58 -- common/autotest_common.sh@945 -- # kill 47771 00:12:14.813 [2024-04-15 20:38:58.093976] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:14.813 20:38:58 -- common/autotest_common.sh@950 -- # wait 47771 00:12:14.813 [2024-04-15 
20:38:58.094039] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:14.813 [2024-04-15 20:38:58.094068] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:14.813 [2024-04-15 20:38:58.094076] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002cd80 name raid_bdev1, state offline 00:12:14.813 [2024-04-15 20:38:58.271266] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:16.220 ************************************ 00:12:16.220 END TEST raid_superblock_test 00:12:16.220 ************************************ 00:12:16.220 20:38:59 -- bdev/bdev_raid.sh@513 -- # return 0 00:12:16.220 00:12:16.220 real 0m7.305s 00:12:16.220 user 0m11.846s 00:12:16.220 sys 0m0.950s 00:12:16.220 20:38:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:16.220 20:38:59 -- common/autotest_common.sh@10 -- # set +x 00:12:16.220 20:38:59 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:12:16.220 20:38:59 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:12:16.220 20:38:59 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:12:16.220 20:38:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:16.220 20:38:59 -- common/autotest_common.sh@10 -- # set +x 00:12:16.220 ************************************ 00:12:16.220 START TEST raid_state_function_test 00:12:16.220 ************************************ 00:12:16.220 20:38:59 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 false 00:12:16.220 20:38:59 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:12:16.220 20:38:59 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:12:16.220 20:38:59 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:12:16.220 20:38:59 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:12:16.220 20:38:59 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:12:16.220 20:38:59 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:12:16.220 20:38:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:16.220 20:38:59 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:12:16.220 20:38:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:16.220 20:38:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:16.220 20:38:59 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:12:16.220 20:38:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:16.220 20:38:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:16.220 20:38:59 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:12:16.220 20:38:59 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:12:16.220 20:38:59 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:12:16.220 20:38:59 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:12:16.220 20:38:59 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:12:16.220 20:38:59 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:12:16.220 20:38:59 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:12:16.220 20:38:59 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:12:16.220 20:38:59 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:12:16.220 Process raid pid: 48007 00:12:16.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
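Each test in this suite follows the same launch pattern seen above: start bdev_svc against a private RPC socket, record its pid for the later killprocess, and block until the socket answers. A minimal sketch, assuming bdev_svc was built in the tree; polling rpc_get_methods here stands in for the suite's own waitforlisten helper:

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # Wait until the app's RPC server accepts requests on the UNIX socket.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
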
00:12:16.220 20:38:59 -- bdev/bdev_raid.sh@226 -- # raid_pid=48007 00:12:16.220 20:38:59 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 48007' 00:12:16.220 20:38:59 -- bdev/bdev_raid.sh@228 -- # waitforlisten 48007 /var/tmp/spdk-raid.sock 00:12:16.220 20:38:59 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:16.220 20:38:59 -- common/autotest_common.sh@819 -- # '[' -z 48007 ']' 00:12:16.220 20:38:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:16.220 20:38:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:16.220 20:38:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:16.220 20:38:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:16.220 20:38:59 -- common/autotest_common.sh@10 -- # set +x 00:12:16.502 [2024-04-15 20:38:59.761346] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:12:16.502 [2024-04-15 20:38:59.761496] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.502 [2024-04-15 20:38:59.919520] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.760 [2024-04-15 20:39:00.120194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.019 [2024-04-15 20:39:00.325349] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:17.958 20:39:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:17.958 20:39:01 -- common/autotest_common.sh@852 -- # return 0 00:12:17.959 20:39:01 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:17.959 [2024-04-15 20:39:01.273029] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:17.959 [2024-04-15 20:39:01.273090] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:17.959 [2024-04-15 20:39:01.273101] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:17.959 [2024-04-15 20:39:01.273117] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:17.959 20:39:01 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:17.959 20:39:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:17.959 20:39:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:17.959 20:39:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:17.959 20:39:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:17.959 20:39:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:17.959 20:39:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:17.959 20:39:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:17.959 20:39:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:17.959 20:39:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:17.959 20:39:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:17.959 20:39:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:12:18.217 20:39:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:18.217 "name": "Existed_Raid", 00:12:18.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.218 "strip_size_kb": 0, 00:12:18.218 "state": "configuring", 00:12:18.218 "raid_level": "raid1", 00:12:18.218 "superblock": false, 00:12:18.218 "num_base_bdevs": 2, 00:12:18.218 "num_base_bdevs_discovered": 0, 00:12:18.218 "num_base_bdevs_operational": 2, 00:12:18.218 "base_bdevs_list": [ 00:12:18.218 { 00:12:18.218 "name": "BaseBdev1", 00:12:18.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.218 "is_configured": false, 00:12:18.218 "data_offset": 0, 00:12:18.218 "data_size": 0 00:12:18.218 }, 00:12:18.218 { 00:12:18.218 "name": "BaseBdev2", 00:12:18.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.218 "is_configured": false, 00:12:18.218 "data_offset": 0, 00:12:18.218 "data_size": 0 00:12:18.218 } 00:12:18.218 ] 00:12:18.218 }' 00:12:18.218 20:39:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:18.218 20:39:01 -- common/autotest_common.sh@10 -- # set +x 00:12:18.785 20:39:01 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:18.785 [2024-04-15 20:39:02.151624] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:18.785 [2024-04-15 20:39:02.151668] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:12:18.785 20:39:02 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:19.043 [2024-04-15 20:39:02.335376] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:19.043 [2024-04-15 20:39:02.335450] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:19.043 [2024-04-15 20:39:02.335461] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:19.044 [2024-04-15 20:39:02.335482] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:19.044 20:39:02 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:19.044 BaseBdev1 00:12:19.044 [2024-04-15 20:39:02.540826] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:19.303 20:39:02 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:12:19.303 20:39:02 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:12:19.303 20:39:02 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:19.303 20:39:02 -- common/autotest_common.sh@889 -- # local i 00:12:19.303 20:39:02 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:19.303 20:39:02 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:19.303 20:39:02 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:19.303 20:39:02 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:19.561 [ 00:12:19.561 { 00:12:19.561 "name": "BaseBdev1", 00:12:19.561 "aliases": [ 00:12:19.561 "b55ffbf6-a171-4b9a-9fdc-8e5c2f0fa89f" 00:12:19.561 ], 00:12:19.561 "product_name": "Malloc disk", 00:12:19.561 
"block_size": 512, 00:12:19.561 "num_blocks": 65536, 00:12:19.561 "uuid": "b55ffbf6-a171-4b9a-9fdc-8e5c2f0fa89f", 00:12:19.561 "assigned_rate_limits": { 00:12:19.561 "rw_ios_per_sec": 0, 00:12:19.561 "rw_mbytes_per_sec": 0, 00:12:19.561 "r_mbytes_per_sec": 0, 00:12:19.561 "w_mbytes_per_sec": 0 00:12:19.561 }, 00:12:19.561 "claimed": true, 00:12:19.561 "claim_type": "exclusive_write", 00:12:19.561 "zoned": false, 00:12:19.561 "supported_io_types": { 00:12:19.561 "read": true, 00:12:19.561 "write": true, 00:12:19.561 "unmap": true, 00:12:19.561 "write_zeroes": true, 00:12:19.561 "flush": true, 00:12:19.561 "reset": true, 00:12:19.561 "compare": false, 00:12:19.561 "compare_and_write": false, 00:12:19.561 "abort": true, 00:12:19.561 "nvme_admin": false, 00:12:19.561 "nvme_io": false 00:12:19.561 }, 00:12:19.561 "memory_domains": [ 00:12:19.561 { 00:12:19.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.561 "dma_device_type": 2 00:12:19.562 } 00:12:19.562 ], 00:12:19.562 "driver_specific": {} 00:12:19.562 } 00:12:19.562 ] 00:12:19.562 20:39:02 -- common/autotest_common.sh@895 -- # return 0 00:12:19.562 20:39:02 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:19.562 20:39:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:19.562 20:39:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:19.562 20:39:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:19.562 20:39:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:19.562 20:39:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:19.562 20:39:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:19.562 20:39:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:19.562 20:39:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:19.562 20:39:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:19.562 20:39:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.562 20:39:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:19.562 20:39:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:19.562 "name": "Existed_Raid", 00:12:19.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.562 "strip_size_kb": 0, 00:12:19.562 "state": "configuring", 00:12:19.562 "raid_level": "raid1", 00:12:19.562 "superblock": false, 00:12:19.562 "num_base_bdevs": 2, 00:12:19.562 "num_base_bdevs_discovered": 1, 00:12:19.562 "num_base_bdevs_operational": 2, 00:12:19.562 "base_bdevs_list": [ 00:12:19.562 { 00:12:19.562 "name": "BaseBdev1", 00:12:19.562 "uuid": "b55ffbf6-a171-4b9a-9fdc-8e5c2f0fa89f", 00:12:19.562 "is_configured": true, 00:12:19.562 "data_offset": 0, 00:12:19.562 "data_size": 65536 00:12:19.562 }, 00:12:19.562 { 00:12:19.562 "name": "BaseBdev2", 00:12:19.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.562 "is_configured": false, 00:12:19.562 "data_offset": 0, 00:12:19.562 "data_size": 0 00:12:19.562 } 00:12:19.562 ] 00:12:19.562 }' 00:12:19.562 20:39:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:19.562 20:39:03 -- common/autotest_common.sh@10 -- # set +x 00:12:20.129 20:39:03 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:20.388 [2024-04-15 20:39:03.715076] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:20.388 [2024-04-15 20:39:03.715128] 
bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027080 name Existed_Raid, state configuring 00:12:20.388 20:39:03 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:12:20.388 20:39:03 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:20.388 [2024-04-15 20:39:03.882911] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:20.388 [2024-04-15 20:39:03.884401] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:20.388 [2024-04-15 20:39:03.884462] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:20.648 20:39:03 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:12:20.648 20:39:03 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:20.648 20:39:03 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:20.648 20:39:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:20.648 20:39:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:20.648 20:39:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:20.648 20:39:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:20.648 20:39:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:20.648 20:39:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:20.648 20:39:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:20.648 20:39:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:20.648 20:39:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:20.648 20:39:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:20.648 20:39:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.648 20:39:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:20.648 "name": "Existed_Raid", 00:12:20.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.648 "strip_size_kb": 0, 00:12:20.648 "state": "configuring", 00:12:20.648 "raid_level": "raid1", 00:12:20.648 "superblock": false, 00:12:20.648 "num_base_bdevs": 2, 00:12:20.648 "num_base_bdevs_discovered": 1, 00:12:20.648 "num_base_bdevs_operational": 2, 00:12:20.648 "base_bdevs_list": [ 00:12:20.648 { 00:12:20.648 "name": "BaseBdev1", 00:12:20.648 "uuid": "b55ffbf6-a171-4b9a-9fdc-8e5c2f0fa89f", 00:12:20.648 "is_configured": true, 00:12:20.648 "data_offset": 0, 00:12:20.648 "data_size": 65536 00:12:20.648 }, 00:12:20.648 { 00:12:20.648 "name": "BaseBdev2", 00:12:20.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.648 "is_configured": false, 00:12:20.648 "data_offset": 0, 00:12:20.648 "data_size": 0 00:12:20.648 } 00:12:20.648 ] 00:12:20.648 }' 00:12:20.648 20:39:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:20.648 20:39:04 -- common/autotest_common.sh@10 -- # set +x 00:12:21.585 20:39:04 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:21.585 [2024-04-15 20:39:04.969706] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:21.585 [2024-04-15 20:39:04.969746] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000027f80 00:12:21.585 [2024-04-15 20:39:04.969763] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 
65536, blocklen 512 00:12:21.585 [2024-04-15 20:39:04.969880] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:12:21.585 [2024-04-15 20:39:04.970096] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000027f80 00:12:21.585 [2024-04-15 20:39:04.970105] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000027f80 00:12:21.585 [2024-04-15 20:39:04.970371] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.585 BaseBdev2 00:12:21.585 20:39:04 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:12:21.585 20:39:04 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:12:21.585 20:39:04 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:21.585 20:39:04 -- common/autotest_common.sh@889 -- # local i 00:12:21.585 20:39:04 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:21.585 20:39:04 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:21.585 20:39:04 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:21.844 20:39:05 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:21.844 [ 00:12:21.844 { 00:12:21.844 "name": "BaseBdev2", 00:12:21.844 "aliases": [ 00:12:21.844 "8806cd0c-3a71-4cf3-9d5b-dc53d31bf016" 00:12:21.844 ], 00:12:21.844 "product_name": "Malloc disk", 00:12:21.844 "block_size": 512, 00:12:21.844 "num_blocks": 65536, 00:12:21.844 "uuid": "8806cd0c-3a71-4cf3-9d5b-dc53d31bf016", 00:12:21.844 "assigned_rate_limits": { 00:12:21.844 "rw_ios_per_sec": 0, 00:12:21.844 "rw_mbytes_per_sec": 0, 00:12:21.844 "r_mbytes_per_sec": 0, 00:12:21.844 "w_mbytes_per_sec": 0 00:12:21.844 }, 00:12:21.844 "claimed": true, 00:12:21.844 "claim_type": "exclusive_write", 00:12:21.844 "zoned": false, 00:12:21.844 "supported_io_types": { 00:12:21.844 "read": true, 00:12:21.844 "write": true, 00:12:21.844 "unmap": true, 00:12:21.844 "write_zeroes": true, 00:12:21.844 "flush": true, 00:12:21.844 "reset": true, 00:12:21.844 "compare": false, 00:12:21.844 "compare_and_write": false, 00:12:21.844 "abort": true, 00:12:21.844 "nvme_admin": false, 00:12:21.844 "nvme_io": false 00:12:21.844 }, 00:12:21.844 "memory_domains": [ 00:12:21.844 { 00:12:21.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:21.844 "dma_device_type": 2 00:12:21.844 } 00:12:21.844 ], 00:12:21.844 "driver_specific": {} 00:12:21.844 } 00:12:21.844 ] 00:12:22.103 20:39:05 -- common/autotest_common.sh@895 -- # return 0 00:12:22.103 20:39:05 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:12:22.103 20:39:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:22.103 20:39:05 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:22.103 20:39:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:22.103 20:39:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:22.103 20:39:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:22.103 20:39:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:22.103 20:39:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:22.103 20:39:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:22.103 20:39:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:22.103 20:39:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 
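The waitforbdev helper above leans on bdev_get_bdevs' -t flag, which blocks until the named bdev registers or the timeout (in milliseconds) expires. A sketch of the create-then-wait step, same socket as above:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_malloc_create 32 512 -b BaseBdev2
    # Returns once BaseBdev2 is registered, or fails after 2000 ms.
    $rpc bdev_get_bdevs -b BaseBdev2 -t 2000 >/dev/null
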
00:12:22.103 20:39:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:22.103 20:39:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.103 20:39:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:22.103 20:39:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:22.103 "name": "Existed_Raid", 00:12:22.103 "uuid": "611200dd-4bd7-461a-b1ab-99c0afdb3c31", 00:12:22.103 "strip_size_kb": 0, 00:12:22.103 "state": "online", 00:12:22.103 "raid_level": "raid1", 00:12:22.103 "superblock": false, 00:12:22.103 "num_base_bdevs": 2, 00:12:22.103 "num_base_bdevs_discovered": 2, 00:12:22.103 "num_base_bdevs_operational": 2, 00:12:22.103 "base_bdevs_list": [ 00:12:22.103 { 00:12:22.103 "name": "BaseBdev1", 00:12:22.103 "uuid": "b55ffbf6-a171-4b9a-9fdc-8e5c2f0fa89f", 00:12:22.103 "is_configured": true, 00:12:22.103 "data_offset": 0, 00:12:22.103 "data_size": 65536 00:12:22.103 }, 00:12:22.103 { 00:12:22.104 "name": "BaseBdev2", 00:12:22.104 "uuid": "8806cd0c-3a71-4cf3-9d5b-dc53d31bf016", 00:12:22.104 "is_configured": true, 00:12:22.104 "data_offset": 0, 00:12:22.104 "data_size": 65536 00:12:22.104 } 00:12:22.104 ] 00:12:22.104 }' 00:12:22.104 20:39:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:22.104 20:39:05 -- common/autotest_common.sh@10 -- # set +x 00:12:22.671 20:39:06 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:22.930 [2024-04-15 20:39:06.271887] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:22.930 20:39:06 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:12:22.930 20:39:06 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:12:22.930 20:39:06 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:12:22.930 20:39:06 -- bdev/bdev_raid.sh@196 -- # return 0 00:12:22.930 20:39:06 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:12:22.930 20:39:06 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:12:22.930 20:39:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:22.930 20:39:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:22.930 20:39:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:22.930 20:39:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:22.930 20:39:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:12:22.930 20:39:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:22.930 20:39:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:22.930 20:39:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:22.930 20:39:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:22.930 20:39:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.930 20:39:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:23.196 20:39:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:23.196 "name": "Existed_Raid", 00:12:23.196 "uuid": "611200dd-4bd7-461a-b1ab-99c0afdb3c31", 00:12:23.196 "strip_size_kb": 0, 00:12:23.196 "state": "online", 00:12:23.196 "raid_level": "raid1", 00:12:23.196 "superblock": false, 00:12:23.196 "num_base_bdevs": 2, 00:12:23.196 "num_base_bdevs_discovered": 1, 00:12:23.196 "num_base_bdevs_operational": 1, 00:12:23.196 "base_bdevs_list": [ 00:12:23.196 { 00:12:23.196 "name": null, 00:12:23.196 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:23.196 "is_configured": false, 00:12:23.196 "data_offset": 0, 00:12:23.196 "data_size": 65536 00:12:23.196 }, 00:12:23.196 { 00:12:23.196 "name": "BaseBdev2", 00:12:23.196 "uuid": "8806cd0c-3a71-4cf3-9d5b-dc53d31bf016", 00:12:23.196 "is_configured": true, 00:12:23.196 "data_offset": 0, 00:12:23.196 "data_size": 65536 00:12:23.196 } 00:12:23.196 ] 00:12:23.196 }' 00:12:23.196 20:39:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:23.196 20:39:06 -- common/autotest_common.sh@10 -- # set +x 00:12:23.765 20:39:07 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:12:23.765 20:39:07 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:12:23.765 20:39:07 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:23.765 20:39:07 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:12:24.024 20:39:07 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:12:24.024 20:39:07 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:24.024 20:39:07 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:24.283 [2024-04-15 20:39:07.528810] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:24.283 [2024-04-15 20:39:07.528847] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:24.283 [2024-04-15 20:39:07.528895] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:24.283 [2024-04-15 20:39:07.623947] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:24.283 [2024-04-15 20:39:07.623986] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027f80 name Existed_Raid, state offline 00:12:24.283 20:39:07 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:12:24.283 20:39:07 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:12:24.283 20:39:07 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:24.283 20:39:07 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:12:24.541 20:39:07 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:12:24.541 20:39:07 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:12:24.541 20:39:07 -- bdev/bdev_raid.sh@287 -- # killprocess 48007 00:12:24.541 20:39:07 -- common/autotest_common.sh@926 -- # '[' -z 48007 ']' 00:12:24.541 20:39:07 -- common/autotest_common.sh@930 -- # kill -0 48007 00:12:24.541 20:39:07 -- common/autotest_common.sh@931 -- # uname 00:12:24.541 20:39:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:24.541 20:39:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 48007 00:12:24.541 killing process with pid 48007 00:12:24.541 20:39:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:24.541 20:39:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:24.541 20:39:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 48007' 00:12:24.541 20:39:07 -- common/autotest_common.sh@945 -- # kill 48007 00:12:24.541 20:39:07 -- common/autotest_common.sh@950 -- # wait 48007 00:12:24.541 [2024-04-15 20:39:07.879346] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:24.541 [2024-04-15 20:39:07.879487] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:25.918 20:39:09 -- 
bdev/bdev_raid.sh@289 -- # return 0 00:12:25.918 00:12:25.918 real 0m9.574s 00:12:25.918 user 0m15.727s 00:12:25.918 sys 0m1.204s 00:12:25.918 20:39:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:25.918 20:39:09 -- common/autotest_common.sh@10 -- # set +x 00:12:25.918 ************************************ 00:12:25.918 END TEST raid_state_function_test 00:12:25.918 ************************************ 00:12:25.918 20:39:09 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:12:25.918 20:39:09 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:12:25.918 20:39:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:25.918 20:39:09 -- common/autotest_common.sh@10 -- # set +x 00:12:25.918 ************************************ 00:12:25.918 START TEST raid_state_function_test_sb 00:12:25.918 ************************************ 00:12:25.918 20:39:09 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 true 00:12:25.918 20:39:09 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:12:25.918 20:39:09 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:12:25.918 20:39:09 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:12:25.918 20:39:09 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:12:25.918 20:39:09 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:12:25.918 20:39:09 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:12:25.918 20:39:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:25.918 20:39:09 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:12:25.918 20:39:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:25.918 20:39:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:25.918 20:39:09 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:12:25.918 20:39:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:25.918 20:39:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:25.918 20:39:09 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:12:25.918 20:39:09 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:12:25.918 20:39:09 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:12:25.918 20:39:09 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:12:25.918 20:39:09 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:12:25.918 20:39:09 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:12:25.918 20:39:09 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:12:25.918 20:39:09 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:12:25.918 20:39:09 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:12:25.918 Process raid pid: 48332 00:12:25.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
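The expected_state logic above hinges on has_redundancy: raid1 tolerates the loss of one base bdev, so after BaseBdev1 is deleted the array is expected to stay online with a single operational member, whereas the earlier concat run went offline after the same removal. A sketch of that check, reusing the RPCs from the trace (names as in the suite):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_malloc_delete BaseBdev1
    state=$($rpc bdev_raid_get_bdevs all |
            jq -r '.[] | select(.name == "Existed_Raid") | .state')
    [ "$state" = online ] || echo "raid1 should survive losing one base bdev" >&2
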
00:12:25.918 20:39:09 -- bdev/bdev_raid.sh@226 -- # raid_pid=48332 00:12:25.918 20:39:09 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 48332' 00:12:25.918 20:39:09 -- bdev/bdev_raid.sh@228 -- # waitforlisten 48332 /var/tmp/spdk-raid.sock 00:12:25.918 20:39:09 -- common/autotest_common.sh@819 -- # '[' -z 48332 ']' 00:12:25.918 20:39:09 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:25.918 20:39:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:25.918 20:39:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:25.918 20:39:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:25.918 20:39:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:25.918 20:39:09 -- common/autotest_common.sh@10 -- # set +x 00:12:25.918 [2024-04-15 20:39:09.408283] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:12:25.918 [2024-04-15 20:39:09.408426] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.177 [2024-04-15 20:39:09.570834] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.435 [2024-04-15 20:39:09.760258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.693 [2024-04-15 20:39:09.967277] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:27.630 20:39:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:27.630 20:39:10 -- common/autotest_common.sh@852 -- # return 0 00:12:27.630 20:39:10 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:27.630 [2024-04-15 20:39:10.935153] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:27.630 [2024-04-15 20:39:10.935221] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:27.630 [2024-04-15 20:39:10.935232] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:27.630 [2024-04-15 20:39:10.935249] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:27.630 20:39:10 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:27.630 20:39:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:27.630 20:39:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:27.630 20:39:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:27.630 20:39:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:27.630 20:39:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:27.630 20:39:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:27.630 20:39:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:27.630 20:39:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:27.630 20:39:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:27.630 20:39:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.630 20:39:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:27.889 20:39:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:27.889 "name": "Existed_Raid", 00:12:27.889 "uuid": "add16c5e-89a7-42c7-9668-2403295e0edd", 00:12:27.889 "strip_size_kb": 0, 00:12:27.889 "state": "configuring", 00:12:27.889 "raid_level": "raid1", 00:12:27.889 "superblock": true, 00:12:27.889 "num_base_bdevs": 2, 00:12:27.889 "num_base_bdevs_discovered": 0, 00:12:27.889 "num_base_bdevs_operational": 2, 00:12:27.889 "base_bdevs_list": [ 00:12:27.889 { 00:12:27.889 "name": "BaseBdev1", 00:12:27.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.889 "is_configured": false, 00:12:27.889 "data_offset": 0, 00:12:27.889 "data_size": 0 00:12:27.889 }, 00:12:27.889 { 00:12:27.889 "name": "BaseBdev2", 00:12:27.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.889 "is_configured": false, 00:12:27.889 "data_offset": 0, 00:12:27.889 "data_size": 0 00:12:27.889 } 00:12:27.889 ] 00:12:27.889 }' 00:12:27.889 20:39:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:27.889 20:39:11 -- common/autotest_common.sh@10 -- # set +x 00:12:28.457 20:39:11 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:28.457 [2024-04-15 20:39:11.841624] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:28.457 [2024-04-15 20:39:11.841673] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:12:28.457 20:39:11 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:28.716 [2024-04-15 20:39:12.005580] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:28.716 [2024-04-15 20:39:12.005908] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:28.716 [2024-04-15 20:39:12.005936] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:28.716 [2024-04-15 20:39:12.005967] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:28.716 20:39:12 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:28.716 [2024-04-15 20:39:12.198182] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:28.716 BaseBdev1 00:12:28.973 20:39:12 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:12:28.973 20:39:12 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:12:28.973 20:39:12 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:28.973 20:39:12 -- common/autotest_common.sh@889 -- # local i 00:12:28.973 20:39:12 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:28.973 20:39:12 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:28.973 20:39:12 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:28.973 20:39:12 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:29.232 [ 00:12:29.232 { 00:12:29.232 "name": "BaseBdev1", 00:12:29.232 "aliases": [ 00:12:29.232 "9febf31c-72e6-4112-b605-0851e7eb3a7c" 00:12:29.232 ], 00:12:29.232 "product_name": "Malloc 
disk", 00:12:29.232 "block_size": 512, 00:12:29.232 "num_blocks": 65536, 00:12:29.232 "uuid": "9febf31c-72e6-4112-b605-0851e7eb3a7c", 00:12:29.232 "assigned_rate_limits": { 00:12:29.232 "rw_ios_per_sec": 0, 00:12:29.232 "rw_mbytes_per_sec": 0, 00:12:29.232 "r_mbytes_per_sec": 0, 00:12:29.232 "w_mbytes_per_sec": 0 00:12:29.232 }, 00:12:29.232 "claimed": true, 00:12:29.232 "claim_type": "exclusive_write", 00:12:29.232 "zoned": false, 00:12:29.232 "supported_io_types": { 00:12:29.232 "read": true, 00:12:29.232 "write": true, 00:12:29.232 "unmap": true, 00:12:29.232 "write_zeroes": true, 00:12:29.232 "flush": true, 00:12:29.232 "reset": true, 00:12:29.232 "compare": false, 00:12:29.232 "compare_and_write": false, 00:12:29.232 "abort": true, 00:12:29.232 "nvme_admin": false, 00:12:29.232 "nvme_io": false 00:12:29.232 }, 00:12:29.232 "memory_domains": [ 00:12:29.232 { 00:12:29.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.232 "dma_device_type": 2 00:12:29.232 } 00:12:29.232 ], 00:12:29.232 "driver_specific": {} 00:12:29.232 } 00:12:29.232 ] 00:12:29.232 20:39:12 -- common/autotest_common.sh@895 -- # return 0 00:12:29.232 20:39:12 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:29.232 20:39:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:29.232 20:39:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:29.232 20:39:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:29.232 20:39:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:29.232 20:39:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:29.232 20:39:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:29.232 20:39:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:29.232 20:39:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:29.232 20:39:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:29.232 20:39:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:29.232 20:39:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:29.491 20:39:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:29.491 "name": "Existed_Raid", 00:12:29.491 "uuid": "39aec47e-a4d5-456c-8bf0-44218ef30658", 00:12:29.491 "strip_size_kb": 0, 00:12:29.491 "state": "configuring", 00:12:29.491 "raid_level": "raid1", 00:12:29.491 "superblock": true, 00:12:29.491 "num_base_bdevs": 2, 00:12:29.491 "num_base_bdevs_discovered": 1, 00:12:29.491 "num_base_bdevs_operational": 2, 00:12:29.491 "base_bdevs_list": [ 00:12:29.491 { 00:12:29.491 "name": "BaseBdev1", 00:12:29.491 "uuid": "9febf31c-72e6-4112-b605-0851e7eb3a7c", 00:12:29.491 "is_configured": true, 00:12:29.491 "data_offset": 2048, 00:12:29.491 "data_size": 63488 00:12:29.491 }, 00:12:29.491 { 00:12:29.491 "name": "BaseBdev2", 00:12:29.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.491 "is_configured": false, 00:12:29.491 "data_offset": 0, 00:12:29.491 "data_size": 0 00:12:29.491 } 00:12:29.491 ] 00:12:29.491 }' 00:12:29.491 20:39:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:29.491 20:39:12 -- common/autotest_common.sh@10 -- # set +x 00:12:30.095 20:39:13 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:30.095 [2024-04-15 20:39:13.521087] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:30.095 [2024-04-15 
20:39:13.521152] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027080 name Existed_Raid, state configuring 00:12:30.095 20:39:13 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:12:30.095 20:39:13 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:30.353 20:39:13 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:30.612 BaseBdev1 00:12:30.612 20:39:14 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:12:30.612 20:39:14 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:12:30.612 20:39:14 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:30.612 20:39:14 -- common/autotest_common.sh@889 -- # local i 00:12:30.612 20:39:14 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:30.612 20:39:14 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:30.612 20:39:14 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:30.871 20:39:14 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:31.130 [ 00:12:31.130 { 00:12:31.130 "name": "BaseBdev1", 00:12:31.130 "aliases": [ 00:12:31.130 "7d52b1cb-4f6b-4ea5-bfee-c901d6cdafc0" 00:12:31.130 ], 00:12:31.130 "product_name": "Malloc disk", 00:12:31.130 "block_size": 512, 00:12:31.130 "num_blocks": 65536, 00:12:31.130 "uuid": "7d52b1cb-4f6b-4ea5-bfee-c901d6cdafc0", 00:12:31.130 "assigned_rate_limits": { 00:12:31.130 "rw_ios_per_sec": 0, 00:12:31.130 "rw_mbytes_per_sec": 0, 00:12:31.130 "r_mbytes_per_sec": 0, 00:12:31.130 "w_mbytes_per_sec": 0 00:12:31.130 }, 00:12:31.130 "claimed": false, 00:12:31.130 "zoned": false, 00:12:31.130 "supported_io_types": { 00:12:31.130 "read": true, 00:12:31.130 "write": true, 00:12:31.130 "unmap": true, 00:12:31.130 "write_zeroes": true, 00:12:31.130 "flush": true, 00:12:31.130 "reset": true, 00:12:31.130 "compare": false, 00:12:31.130 "compare_and_write": false, 00:12:31.130 "abort": true, 00:12:31.130 "nvme_admin": false, 00:12:31.130 "nvme_io": false 00:12:31.130 }, 00:12:31.130 "memory_domains": [ 00:12:31.130 { 00:12:31.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.130 "dma_device_type": 2 00:12:31.130 } 00:12:31.130 ], 00:12:31.130 "driver_specific": {} 00:12:31.130 } 00:12:31.130 ] 00:12:31.130 20:39:14 -- common/autotest_common.sh@895 -- # return 0 00:12:31.130 20:39:14 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:31.130 [2024-04-15 20:39:14.581140] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:31.130 [2024-04-15 20:39:14.582357] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:31.130 [2024-04-15 20:39:14.582414] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:31.130 20:39:14 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:12:31.130 20:39:14 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:31.130 20:39:14 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:31.130 20:39:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:31.130 20:39:14 -- 
bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:31.130 20:39:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:31.130 20:39:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:31.130 20:39:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:31.130 20:39:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:31.130 20:39:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:31.130 20:39:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:31.130 20:39:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:31.130 20:39:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:31.130 20:39:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.389 20:39:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:31.389 "name": "Existed_Raid", 00:12:31.389 "uuid": "60b15899-89bc-4d4f-a3d1-bafdbec3832a", 00:12:31.389 "strip_size_kb": 0, 00:12:31.389 "state": "configuring", 00:12:31.389 "raid_level": "raid1", 00:12:31.389 "superblock": true, 00:12:31.389 "num_base_bdevs": 2, 00:12:31.389 "num_base_bdevs_discovered": 1, 00:12:31.389 "num_base_bdevs_operational": 2, 00:12:31.389 "base_bdevs_list": [ 00:12:31.389 { 00:12:31.389 "name": "BaseBdev1", 00:12:31.389 "uuid": "7d52b1cb-4f6b-4ea5-bfee-c901d6cdafc0", 00:12:31.389 "is_configured": true, 00:12:31.389 "data_offset": 2048, 00:12:31.389 "data_size": 63488 00:12:31.389 }, 00:12:31.389 { 00:12:31.389 "name": "BaseBdev2", 00:12:31.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.389 "is_configured": false, 00:12:31.389 "data_offset": 0, 00:12:31.389 "data_size": 0 00:12:31.389 } 00:12:31.389 ] 00:12:31.389 }' 00:12:31.389 20:39:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:31.389 20:39:14 -- common/autotest_common.sh@10 -- # set +x 00:12:31.957 20:39:15 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:32.214 BaseBdev2 00:12:32.214 [2024-04-15 20:39:15.479061] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:32.214 [2024-04-15 20:39:15.479190] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000028580 00:12:32.214 [2024-04-15 20:39:15.479202] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:32.214 [2024-04-15 20:39:15.479275] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:12:32.214 [2024-04-15 20:39:15.479450] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000028580 00:12:32.214 [2024-04-15 20:39:15.479460] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000028580 00:12:32.214 [2024-04-15 20:39:15.479539] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.214 20:39:15 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:12:32.214 20:39:15 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:12:32.214 20:39:15 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:32.214 20:39:15 -- common/autotest_common.sh@889 -- # local i 00:12:32.214 20:39:15 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:32.214 20:39:15 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:32.214 20:39:15 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:32.214 20:39:15 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:32.473 [ 00:12:32.473 { 00:12:32.473 "name": "BaseBdev2", 00:12:32.473 "aliases": [ 00:12:32.473 "c80014b4-9224-4b4c-9d58-de5f15328062" 00:12:32.473 ], 00:12:32.473 "product_name": "Malloc disk", 00:12:32.473 "block_size": 512, 00:12:32.473 "num_blocks": 65536, 00:12:32.473 "uuid": "c80014b4-9224-4b4c-9d58-de5f15328062", 00:12:32.473 "assigned_rate_limits": { 00:12:32.473 "rw_ios_per_sec": 0, 00:12:32.473 "rw_mbytes_per_sec": 0, 00:12:32.473 "r_mbytes_per_sec": 0, 00:12:32.473 "w_mbytes_per_sec": 0 00:12:32.473 }, 00:12:32.473 "claimed": true, 00:12:32.473 "claim_type": "exclusive_write", 00:12:32.473 "zoned": false, 00:12:32.473 "supported_io_types": { 00:12:32.473 "read": true, 00:12:32.473 "write": true, 00:12:32.473 "unmap": true, 00:12:32.473 "write_zeroes": true, 00:12:32.473 "flush": true, 00:12:32.473 "reset": true, 00:12:32.473 "compare": false, 00:12:32.473 "compare_and_write": false, 00:12:32.473 "abort": true, 00:12:32.473 "nvme_admin": false, 00:12:32.473 "nvme_io": false 00:12:32.473 }, 00:12:32.473 "memory_domains": [ 00:12:32.473 { 00:12:32.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.473 "dma_device_type": 2 00:12:32.473 } 00:12:32.473 ], 00:12:32.473 "driver_specific": {} 00:12:32.473 } 00:12:32.473 ] 00:12:32.473 20:39:15 -- common/autotest_common.sh@895 -- # return 0 00:12:32.473 20:39:15 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:12:32.473 20:39:15 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:32.473 20:39:15 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:32.473 20:39:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:32.473 20:39:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:32.473 20:39:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:32.473 20:39:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:32.473 20:39:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:32.473 20:39:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:32.473 20:39:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:32.473 20:39:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:32.473 20:39:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:32.473 20:39:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:32.473 20:39:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.731 20:39:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:32.731 "name": "Existed_Raid", 00:12:32.731 "uuid": "60b15899-89bc-4d4f-a3d1-bafdbec3832a", 00:12:32.731 "strip_size_kb": 0, 00:12:32.731 "state": "online", 00:12:32.731 "raid_level": "raid1", 00:12:32.731 "superblock": true, 00:12:32.731 "num_base_bdevs": 2, 00:12:32.731 "num_base_bdevs_discovered": 2, 00:12:32.731 "num_base_bdevs_operational": 2, 00:12:32.731 "base_bdevs_list": [ 00:12:32.731 { 00:12:32.731 "name": "BaseBdev1", 00:12:32.731 "uuid": "7d52b1cb-4f6b-4ea5-bfee-c901d6cdafc0", 00:12:32.731 "is_configured": true, 00:12:32.731 "data_offset": 2048, 00:12:32.731 "data_size": 63488 00:12:32.731 }, 00:12:32.731 { 00:12:32.731 "name": "BaseBdev2", 00:12:32.731 "uuid": 
"c80014b4-9224-4b4c-9d58-de5f15328062", 00:12:32.731 "is_configured": true, 00:12:32.731 "data_offset": 2048, 00:12:32.731 "data_size": 63488 00:12:32.731 } 00:12:32.731 ] 00:12:32.731 }' 00:12:32.731 20:39:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:32.731 20:39:16 -- common/autotest_common.sh@10 -- # set +x 00:12:33.299 20:39:16 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:33.299 [2024-04-15 20:39:16.697397] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:33.557 20:39:16 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:12:33.557 20:39:16 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:12:33.557 20:39:16 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:12:33.557 20:39:16 -- bdev/bdev_raid.sh@196 -- # return 0 00:12:33.557 20:39:16 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:12:33.557 20:39:16 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:12:33.557 20:39:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:33.557 20:39:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:33.557 20:39:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:33.557 20:39:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:33.557 20:39:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:12:33.557 20:39:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:33.557 20:39:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:33.557 20:39:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:33.557 20:39:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:33.557 20:39:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.557 20:39:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:33.557 20:39:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:33.557 "name": "Existed_Raid", 00:12:33.557 "uuid": "60b15899-89bc-4d4f-a3d1-bafdbec3832a", 00:12:33.557 "strip_size_kb": 0, 00:12:33.557 "state": "online", 00:12:33.557 "raid_level": "raid1", 00:12:33.557 "superblock": true, 00:12:33.557 "num_base_bdevs": 2, 00:12:33.557 "num_base_bdevs_discovered": 1, 00:12:33.557 "num_base_bdevs_operational": 1, 00:12:33.557 "base_bdevs_list": [ 00:12:33.557 { 00:12:33.557 "name": null, 00:12:33.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.557 "is_configured": false, 00:12:33.557 "data_offset": 2048, 00:12:33.557 "data_size": 63488 00:12:33.557 }, 00:12:33.557 { 00:12:33.557 "name": "BaseBdev2", 00:12:33.557 "uuid": "c80014b4-9224-4b4c-9d58-de5f15328062", 00:12:33.557 "is_configured": true, 00:12:33.557 "data_offset": 2048, 00:12:33.557 "data_size": 63488 00:12:33.557 } 00:12:33.557 ] 00:12:33.557 }' 00:12:33.557 20:39:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:33.557 20:39:16 -- common/autotest_common.sh@10 -- # set +x 00:12:34.124 20:39:17 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:12:34.124 20:39:17 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:12:34.124 20:39:17 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:34.124 20:39:17 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:12:34.383 20:39:17 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:12:34.383 20:39:17 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:12:34.383 20:39:17 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:34.644 [2024-04-15 20:39:17.945060] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:34.644 [2024-04-15 20:39:17.945093] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:34.644 [2024-04-15 20:39:17.945138] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:34.644 [2024-04-15 20:39:18.032743] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:34.644 [2024-04-15 20:39:18.032794] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000028580 name Existed_Raid, state offline 00:12:34.644 20:39:18 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:12:34.644 20:39:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:12:34.644 20:39:18 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:12:34.644 20:39:18 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:34.915 20:39:18 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:12:34.915 20:39:18 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:12:34.915 20:39:18 -- bdev/bdev_raid.sh@287 -- # killprocess 48332 00:12:34.915 20:39:18 -- common/autotest_common.sh@926 -- # '[' -z 48332 ']' 00:12:34.915 20:39:18 -- common/autotest_common.sh@930 -- # kill -0 48332 00:12:34.915 20:39:18 -- common/autotest_common.sh@931 -- # uname 00:12:34.915 20:39:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:34.915 20:39:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 48332 00:12:34.915 killing process with pid 48332 00:12:34.915 20:39:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:34.915 20:39:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:34.915 20:39:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 48332' 00:12:34.915 20:39:18 -- common/autotest_common.sh@945 -- # kill 48332 00:12:34.915 [2024-04-15 20:39:18.262135] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:34.915 20:39:18 -- common/autotest_common.sh@950 -- # wait 48332 00:12:34.915 [2024-04-15 20:39:18.262225] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:36.293 ************************************ 00:12:36.293 END TEST raid_state_function_test_sb 00:12:36.293 ************************************ 00:12:36.293 20:39:19 -- bdev/bdev_raid.sh@289 -- # return 0 00:12:36.293 00:12:36.293 real 0m10.262s 00:12:36.293 user 0m16.956s 00:12:36.293 sys 0m1.266s 00:12:36.293 20:39:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:36.293 20:39:19 -- common/autotest_common.sh@10 -- # set +x 00:12:36.293 20:39:19 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:12:36.293 20:39:19 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:12:36.293 20:39:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:36.293 20:39:19 -- common/autotest_common.sh@10 -- # set +x 00:12:36.293 ************************************ 00:12:36.293 START TEST raid_superblock_test 00:12:36.293 ************************************ 00:12:36.293 20:39:19 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 2 00:12:36.293 20:39:19 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 
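Every verify_raid_bdev_state call in this trace reduces to one RPC plus a jq filter: fetch bdev_raid_get_bdevs all and select the named array. A small sketch of that check, with the filter copied verbatim from the @127 lines and the field names taken from the JSON dumps above:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')

    # Compare the fields the helper asserts on.
    state=$(jq -r '.state' <<<"$info")                      # configuring/online/offline
    level=$(jq -r '.raid_level' <<<"$info")                 # e.g. raid1
    found=$(jq -r '.num_base_bdevs_discovered' <<<"$info")  # 0..num_base_bdevs

    [ "$state" = online ] && [ "$level" = raid1 ] && [ "$found" -eq 2 ]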
00:12:36.293 20:39:19 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:12:36.293 20:39:19 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:12:36.293 20:39:19 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:12:36.293 20:39:19 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:12:36.293 20:39:19 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:12:36.293 20:39:19 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:12:36.293 20:39:19 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:12:36.293 20:39:19 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:12:36.293 20:39:19 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:12:36.293 20:39:19 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:12:36.293 20:39:19 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:12:36.293 20:39:19 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:12:36.293 20:39:19 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:12:36.293 20:39:19 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:12:36.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:36.293 20:39:19 -- bdev/bdev_raid.sh@357 -- # raid_pid=48658 00:12:36.293 20:39:19 -- bdev/bdev_raid.sh@358 -- # waitforlisten 48658 /var/tmp/spdk-raid.sock 00:12:36.293 20:39:19 -- common/autotest_common.sh@819 -- # '[' -z 48658 ']' 00:12:36.293 20:39:19 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:12:36.293 20:39:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:36.293 20:39:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:36.293 20:39:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:36.293 20:39:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:36.293 20:39:19 -- common/autotest_common.sh@10 -- # set +x 00:12:36.293 [2024-04-15 20:39:19.737895] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:12:36.293 [2024-04-15 20:39:19.738048] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48658 ] 00:12:36.553 [2024-04-15 20:39:19.893588] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.813 [2024-04-15 20:39:20.087739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.813 [2024-04-15 20:39:20.278380] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:37.072 20:39:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:37.072 20:39:20 -- common/autotest_common.sh@852 -- # return 0 00:12:37.072 20:39:20 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:12:37.072 20:39:20 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:12:37.072 20:39:20 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:12:37.072 20:39:20 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:12:37.072 20:39:20 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:37.072 20:39:20 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:37.072 20:39:20 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:12:37.072 20:39:20 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:37.072 20:39:20 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:12:37.332 malloc1 00:12:37.332 20:39:20 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:37.591 [2024-04-15 20:39:20.872893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:37.591 [2024-04-15 20:39:20.873002] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.591 [2024-04-15 20:39:20.873055] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027080 00:12:37.591 [2024-04-15 20:39:20.873094] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.591 [2024-04-15 20:39:20.874769] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.591 [2024-04-15 20:39:20.874809] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:37.591 pt1 00:12:37.591 20:39:20 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:12:37.591 20:39:20 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:12:37.591 20:39:20 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:12:37.591 20:39:20 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:12:37.591 20:39:20 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:37.591 20:39:20 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:37.591 20:39:20 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:12:37.591 20:39:20 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:37.591 20:39:20 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:12:37.850 malloc2 00:12:37.851 20:39:21 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:12:37.851 [2024-04-15 20:39:21.238896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:37.851 [2024-04-15 20:39:21.238974] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.851 [2024-04-15 20:39:21.239012] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000028e80 00:12:37.851 [2024-04-15 20:39:21.239045] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.851 [2024-04-15 20:39:21.240683] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.851 [2024-04-15 20:39:21.240721] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:37.851 pt2 00:12:37.851 20:39:21 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:12:37.851 20:39:21 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:12:37.851 20:39:21 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:12:38.110 [2024-04-15 20:39:21.390774] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:38.110 [2024-04-15 20:39:21.392613] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:38.110 [2024-04-15 20:39:21.393012] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002a380 00:12:38.110 [2024-04-15 20:39:21.393050] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:38.110 [2024-04-15 20:39:21.393334] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:12:38.110 [2024-04-15 20:39:21.393921] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002a380 00:12:38.110 [2024-04-15 20:39:21.393958] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002a380 00:12:38.110 [2024-04-15 20:39:21.394221] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.110 20:39:21 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:38.110 20:39:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:12:38.110 20:39:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:38.110 20:39:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:38.110 20:39:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:38.110 20:39:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:38.110 20:39:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:38.110 20:39:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:38.110 20:39:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:38.110 20:39:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:38.110 20:39:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:38.110 20:39:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.110 20:39:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:38.110 "name": "raid_bdev1", 00:12:38.110 "uuid": "d268eb50-2175-4e10-ad0e-fec53d8e0396", 00:12:38.110 "strip_size_kb": 0, 00:12:38.110 "state": "online", 00:12:38.110 "raid_level": "raid1", 00:12:38.110 "superblock": true, 00:12:38.110 "num_base_bdevs": 2, 00:12:38.110 "num_base_bdevs_discovered": 2, 00:12:38.110 
"num_base_bdevs_operational": 2, 00:12:38.110 "base_bdevs_list": [ 00:12:38.110 { 00:12:38.110 "name": "pt1", 00:12:38.110 "uuid": "7f60b92b-6bdf-5dcd-b9a5-b22b1a2d03b7", 00:12:38.110 "is_configured": true, 00:12:38.110 "data_offset": 2048, 00:12:38.110 "data_size": 63488 00:12:38.110 }, 00:12:38.110 { 00:12:38.110 "name": "pt2", 00:12:38.110 "uuid": "74605568-3e10-55c2-8278-dd018b4ab75e", 00:12:38.110 "is_configured": true, 00:12:38.110 "data_offset": 2048, 00:12:38.110 "data_size": 63488 00:12:38.110 } 00:12:38.110 ] 00:12:38.110 }' 00:12:38.110 20:39:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:38.110 20:39:21 -- common/autotest_common.sh@10 -- # set +x 00:12:38.678 20:39:22 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:38.678 20:39:22 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:12:38.948 [2024-04-15 20:39:22.249517] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:38.948 20:39:22 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=d268eb50-2175-4e10-ad0e-fec53d8e0396 00:12:38.948 20:39:22 -- bdev/bdev_raid.sh@380 -- # '[' -z d268eb50-2175-4e10-ad0e-fec53d8e0396 ']' 00:12:38.948 20:39:22 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:38.948 [2024-04-15 20:39:22.413131] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:38.948 [2024-04-15 20:39:22.413163] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:38.948 [2024-04-15 20:39:22.413225] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:38.948 [2024-04-15 20:39:22.413259] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:38.948 [2024-04-15 20:39:22.413268] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002a380 name raid_bdev1, state offline 00:12:38.948 20:39:22 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:12:38.948 20:39:22 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:39.208 20:39:22 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:12:39.208 20:39:22 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:12:39.208 20:39:22 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:12:39.208 20:39:22 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:39.467 20:39:22 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:12:39.467 20:39:22 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:39.467 20:39:22 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:12:39.467 20:39:22 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:39.726 20:39:23 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:12:39.726 20:39:23 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:12:39.726 20:39:23 -- common/autotest_common.sh@640 -- # local es=0 00:12:39.726 20:39:23 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:12:39.726 20:39:23 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:39.726 20:39:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:39.726 20:39:23 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:39.726 20:39:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:39.726 20:39:23 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:39.726 20:39:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:39.726 20:39:23 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:39.726 20:39:23 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:39.726 20:39:23 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:12:39.985 [2024-04-15 20:39:23.263847] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:39.985 [2024-04-15 20:39:23.265242] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:39.985 [2024-04-15 20:39:23.265286] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:12:39.985 [2024-04-15 20:39:23.265342] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:12:39.985 [2024-04-15 20:39:23.265368] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:39.985 [2024-04-15 20:39:23.265378] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002a980 name raid_bdev1, state configuring 00:12:39.985 request: 00:12:39.985 { 00:12:39.985 "name": "raid_bdev1", 00:12:39.985 "raid_level": "raid1", 00:12:39.985 "base_bdevs": [ 00:12:39.985 "malloc1", 00:12:39.985 "malloc2" 00:12:39.985 ], 00:12:39.985 "superblock": false, 00:12:39.985 "method": "bdev_raid_create", 00:12:39.985 "req_id": 1 00:12:39.985 } 00:12:39.985 Got JSON-RPC error response 00:12:39.985 response: 00:12:39.985 { 00:12:39.985 "code": -17, 00:12:39.985 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:39.985 } 00:12:39.985 20:39:23 -- common/autotest_common.sh@643 -- # es=1 00:12:39.985 20:39:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:39.985 20:39:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:39.985 20:39:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:39.985 20:39:23 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:12:39.985 20:39:23 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:39.985 20:39:23 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:12:39.985 20:39:23 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:12:39.985 20:39:23 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:40.244 [2024-04-15 20:39:23.631282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:40.244 [2024-04-15 20:39:23.631372] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.245 [2024-04-15 20:39:23.631407] 
vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002b880 00:12:40.245 [2024-04-15 20:39:23.631433] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.245 [2024-04-15 20:39:23.633106] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.245 [2024-04-15 20:39:23.633154] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:40.245 [2024-04-15 20:39:23.633222] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:12:40.245 [2024-04-15 20:39:23.633281] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:40.245 pt1 00:12:40.245 20:39:23 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:40.245 20:39:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:12:40.245 20:39:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:40.245 20:39:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:40.245 20:39:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:40.245 20:39:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:40.245 20:39:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:40.245 20:39:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:40.245 20:39:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:40.245 20:39:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:40.245 20:39:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.245 20:39:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:40.504 20:39:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:40.504 "name": "raid_bdev1", 00:12:40.504 "uuid": "d268eb50-2175-4e10-ad0e-fec53d8e0396", 00:12:40.504 "strip_size_kb": 0, 00:12:40.504 "state": "configuring", 00:12:40.504 "raid_level": "raid1", 00:12:40.504 "superblock": true, 00:12:40.504 "num_base_bdevs": 2, 00:12:40.504 "num_base_bdevs_discovered": 1, 00:12:40.504 "num_base_bdevs_operational": 2, 00:12:40.504 "base_bdevs_list": [ 00:12:40.504 { 00:12:40.504 "name": "pt1", 00:12:40.504 "uuid": "7f60b92b-6bdf-5dcd-b9a5-b22b1a2d03b7", 00:12:40.504 "is_configured": true, 00:12:40.504 "data_offset": 2048, 00:12:40.504 "data_size": 63488 00:12:40.504 }, 00:12:40.504 { 00:12:40.504 "name": null, 00:12:40.504 "uuid": "74605568-3e10-55c2-8278-dd018b4ab75e", 00:12:40.504 "is_configured": false, 00:12:40.504 "data_offset": 2048, 00:12:40.504 "data_size": 63488 00:12:40.504 } 00:12:40.504 ] 00:12:40.504 }' 00:12:40.504 20:39:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:40.504 20:39:23 -- common/autotest_common.sh@10 -- # set +x 00:12:41.072 20:39:24 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:12:41.072 20:39:24 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:12:41.072 20:39:24 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:12:41.072 20:39:24 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:41.072 [2024-04-15 20:39:24.426071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:41.072 [2024-04-15 20:39:24.426154] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.072 [2024-04-15 20:39:24.426187] vbdev_passthru.c: 
676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002d380 00:12:41.072 [2024-04-15 20:39:24.426212] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.072 [2024-04-15 20:39:24.426498] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.072 [2024-04-15 20:39:24.426526] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:41.072 [2024-04-15 20:39:24.426596] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:12:41.072 [2024-04-15 20:39:24.426612] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:41.072 [2024-04-15 20:39:24.426835] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002cd80 00:12:41.072 [2024-04-15 20:39:24.426852] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:41.072 [2024-04-15 20:39:24.426957] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:12:41.072 [2024-04-15 20:39:24.427111] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002cd80 00:12:41.072 [2024-04-15 20:39:24.427120] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002cd80 00:12:41.072 [2024-04-15 20:39:24.427211] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:41.072 pt2 00:12:41.072 20:39:24 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:12:41.072 20:39:24 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:12:41.072 20:39:24 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:41.072 20:39:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:12:41.072 20:39:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:41.072 20:39:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:41.072 20:39:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:41.072 20:39:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:41.072 20:39:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:41.072 20:39:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:41.072 20:39:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:41.072 20:39:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:41.072 20:39:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:41.072 20:39:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.331 20:39:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:41.331 "name": "raid_bdev1", 00:12:41.331 "uuid": "d268eb50-2175-4e10-ad0e-fec53d8e0396", 00:12:41.331 "strip_size_kb": 0, 00:12:41.331 "state": "online", 00:12:41.331 "raid_level": "raid1", 00:12:41.331 "superblock": true, 00:12:41.331 "num_base_bdevs": 2, 00:12:41.331 "num_base_bdevs_discovered": 2, 00:12:41.331 "num_base_bdevs_operational": 2, 00:12:41.331 "base_bdevs_list": [ 00:12:41.331 { 00:12:41.331 "name": "pt1", 00:12:41.331 "uuid": "7f60b92b-6bdf-5dcd-b9a5-b22b1a2d03b7", 00:12:41.331 "is_configured": true, 00:12:41.331 "data_offset": 2048, 00:12:41.331 "data_size": 63488 00:12:41.331 }, 00:12:41.331 { 00:12:41.331 "name": "pt2", 00:12:41.331 "uuid": "74605568-3e10-55c2-8278-dd018b4ab75e", 00:12:41.331 "is_configured": true, 00:12:41.331 "data_offset": 2048, 00:12:41.331 "data_size": 63488 00:12:41.331 } 
00:12:41.331 ] 00:12:41.331 }' 00:12:41.331 20:39:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:41.331 20:39:24 -- common/autotest_common.sh@10 -- # set +x 00:12:41.590 20:39:25 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:41.590 20:39:25 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:12:41.849 [2024-04-15 20:39:25.229010] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:41.849 20:39:25 -- bdev/bdev_raid.sh@430 -- # '[' d268eb50-2175-4e10-ad0e-fec53d8e0396 '!=' d268eb50-2175-4e10-ad0e-fec53d8e0396 ']' 00:12:41.849 20:39:25 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:12:41.849 20:39:25 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:12:41.849 20:39:25 -- bdev/bdev_raid.sh@196 -- # return 0 00:12:41.849 20:39:25 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:42.108 [2024-04-15 20:39:25.400691] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:42.108 20:39:25 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:42.108 20:39:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:12:42.108 20:39:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:42.108 20:39:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:42.108 20:39:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:42.108 20:39:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:12:42.108 20:39:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:42.108 20:39:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:42.108 20:39:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:42.108 20:39:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:42.108 20:39:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:42.108 20:39:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.108 20:39:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:42.108 "name": "raid_bdev1", 00:12:42.108 "uuid": "d268eb50-2175-4e10-ad0e-fec53d8e0396", 00:12:42.108 "strip_size_kb": 0, 00:12:42.108 "state": "online", 00:12:42.108 "raid_level": "raid1", 00:12:42.108 "superblock": true, 00:12:42.108 "num_base_bdevs": 2, 00:12:42.108 "num_base_bdevs_discovered": 1, 00:12:42.108 "num_base_bdevs_operational": 1, 00:12:42.108 "base_bdevs_list": [ 00:12:42.108 { 00:12:42.108 "name": null, 00:12:42.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.108 "is_configured": false, 00:12:42.108 "data_offset": 2048, 00:12:42.108 "data_size": 63488 00:12:42.108 }, 00:12:42.108 { 00:12:42.108 "name": "pt2", 00:12:42.108 "uuid": "74605568-3e10-55c2-8278-dd018b4ab75e", 00:12:42.108 "is_configured": true, 00:12:42.108 "data_offset": 2048, 00:12:42.108 "data_size": 63488 00:12:42.108 } 00:12:42.108 ] 00:12:42.108 }' 00:12:42.108 20:39:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:42.108 20:39:25 -- common/autotest_common.sh@10 -- # set +x 00:12:42.676 20:39:26 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:42.935 [2024-04-15 20:39:26.195428] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:42.935 [2024-04-15 20:39:26.195459] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: 
raid bdev state changing from online to offline 00:12:42.935 [2024-04-15 20:39:26.195498] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:42.935 [2024-04-15 20:39:26.195523] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:42.935 [2024-04-15 20:39:26.195531] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002cd80 name raid_bdev1, state offline 00:12:42.935 20:39:26 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:42.935 20:39:26 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:12:42.935 20:39:26 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:12:42.935 20:39:26 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:12:42.935 20:39:26 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:12:42.935 20:39:26 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:12:42.935 20:39:26 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:43.194 20:39:26 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:12:43.194 20:39:26 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:12:43.194 20:39:26 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:12:43.194 20:39:26 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:12:43.194 20:39:26 -- bdev/bdev_raid.sh@462 -- # i=1 00:12:43.194 20:39:26 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:43.194 [2024-04-15 20:39:26.674738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:43.194 [2024-04-15 20:39:26.674828] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.194 [2024-04-15 20:39:26.674863] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002e880 00:12:43.194 [2024-04-15 20:39:26.674890] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.194 [2024-04-15 20:39:26.676572] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.194 [2024-04-15 20:39:26.676621] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:43.194 [2024-04-15 20:39:26.676717] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:12:43.194 [2024-04-15 20:39:26.676785] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:43.194 [2024-04-15 20:39:26.676847] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000030080 00:12:43.194 [2024-04-15 20:39:26.676855] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:43.194 [2024-04-15 20:39:26.676929] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:12:43.194 [2024-04-15 20:39:26.677107] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000030080 00:12:43.194 [2024-04-15 20:39:26.677117] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000030080 00:12:43.194 [2024-04-15 20:39:26.677209] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:43.194 pt2 00:12:43.194 20:39:26 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:43.194 20:39:26 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:12:43.194 20:39:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:43.194 20:39:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:43.194 20:39:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:43.194 20:39:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:12:43.194 20:39:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:43.194 20:39:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:43.194 20:39:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:43.194 20:39:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:43.194 20:39:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.194 20:39:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:43.452 20:39:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:43.452 "name": "raid_bdev1", 00:12:43.452 "uuid": "d268eb50-2175-4e10-ad0e-fec53d8e0396", 00:12:43.452 "strip_size_kb": 0, 00:12:43.452 "state": "online", 00:12:43.452 "raid_level": "raid1", 00:12:43.452 "superblock": true, 00:12:43.452 "num_base_bdevs": 2, 00:12:43.452 "num_base_bdevs_discovered": 1, 00:12:43.452 "num_base_bdevs_operational": 1, 00:12:43.452 "base_bdevs_list": [ 00:12:43.452 { 00:12:43.452 "name": null, 00:12:43.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.452 "is_configured": false, 00:12:43.452 "data_offset": 2048, 00:12:43.452 "data_size": 63488 00:12:43.452 }, 00:12:43.452 { 00:12:43.452 "name": "pt2", 00:12:43.452 "uuid": "74605568-3e10-55c2-8278-dd018b4ab75e", 00:12:43.452 "is_configured": true, 00:12:43.452 "data_offset": 2048, 00:12:43.452 "data_size": 63488 00:12:43.452 } 00:12:43.452 ] 00:12:43.452 }' 00:12:43.452 20:39:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:43.452 20:39:26 -- common/autotest_common.sh@10 -- # set +x 00:12:44.041 20:39:27 -- bdev/bdev_raid.sh@468 -- # '[' 2 -gt 2 ']' 00:12:44.041 20:39:27 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:44.041 20:39:27 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:12:44.300 [2024-04-15 20:39:27.657334] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:44.300 20:39:27 -- bdev/bdev_raid.sh@506 -- # '[' d268eb50-2175-4e10-ad0e-fec53d8e0396 '!=' d268eb50-2175-4e10-ad0e-fec53d8e0396 ']' 00:12:44.300 20:39:27 -- bdev/bdev_raid.sh@511 -- # killprocess 48658 00:12:44.300 20:39:27 -- common/autotest_common.sh@926 -- # '[' -z 48658 ']' 00:12:44.300 20:39:27 -- common/autotest_common.sh@930 -- # kill -0 48658 00:12:44.300 20:39:27 -- common/autotest_common.sh@931 -- # uname 00:12:44.300 20:39:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:44.300 20:39:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 48658 00:12:44.300 20:39:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:44.300 20:39:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:44.300 killing process with pid 48658 00:12:44.300 20:39:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 48658' 00:12:44.300 20:39:27 -- common/autotest_common.sh@945 -- # kill 48658 00:12:44.300 20:39:27 -- common/autotest_common.sh@950 -- # wait 48658 00:12:44.300 [2024-04-15 20:39:27.709991] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
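Two superblock behaviours show up in the raid_superblock_test pass above and are easy to miss in the noise: examine finds the superblock that was written through pt1/pt2 on the underlying malloc bdevs, so a second bdev_raid_create on them is rejected with JSON-RPC error -17 (File exists); and removing one passthru leaves the array online but degraded. A hedged sketch of both, assuming the malloc1/malloc2 and pt1 bdevs from the trace still exist:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Rejected: malloc1/malloc2 already carry raid_bdev1's superblock.
    $RPC bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 \
        || echo "rejected as expected: File exists (-17)"

    # Dropping one leg keeps raid_bdev1 online with a single operational bdev.
    $RPC bdev_passthru_delete pt1
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
    # -> "num_base_bdevs_discovered": 1, "num_base_bdevs_operational": 1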
00:12:44.300 [2024-04-15 20:39:27.710057] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:44.300 [2024-04-15 20:39:27.710086] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:44.300 [2024-04-15 20:39:27.710095] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000030080 name raid_bdev1, state offline 00:12:44.558 [2024-04-15 20:39:27.883002] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:45.936 ************************************ 00:12:45.936 END TEST raid_superblock_test 00:12:45.936 ************************************ 00:12:45.936 20:39:29 -- bdev/bdev_raid.sh@513 -- # return 0 00:12:45.936 00:12:45.936 real 0m9.567s 00:12:45.936 user 0m16.341s 00:12:45.936 sys 0m1.268s 00:12:45.936 20:39:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:45.936 20:39:29 -- common/autotest_common.sh@10 -- # set +x 00:12:45.936 20:39:29 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:12:45.936 20:39:29 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:12:45.936 20:39:29 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:12:45.936 20:39:29 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:12:45.936 20:39:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:45.936 20:39:29 -- common/autotest_common.sh@10 -- # set +x 00:12:45.936 ************************************ 00:12:45.936 START TEST raid_state_function_test 00:12:45.936 ************************************ 00:12:45.936 20:39:29 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 false 00:12:45.936 20:39:29 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:12:45.936 20:39:29 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:12:45.936 20:39:29 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:12:45.936 20:39:29 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:12:45.936 20:39:29 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:12:45.936 20:39:29 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:12:45.936 20:39:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:45.936 20:39:29 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:12:45.936 20:39:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:45.936 20:39:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:45.936 20:39:29 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:12:45.937 20:39:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:45.937 20:39:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:45.937 20:39:29 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:12:45.937 20:39:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:45.937 20:39:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:45.937 20:39:29 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:12:45.937 20:39:29 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:12:45.937 20:39:29 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:12:45.937 20:39:29 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:12:45.937 20:39:29 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:12:45.937 20:39:29 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:12:45.937 Process raid pid: 48986 00:12:45.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
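The @206 trace at the head of raid_state_function_test builds the base_bdevs array with a command substitution: the arithmetic for-loop echoes BaseBdev1 through BaseBdevN, and the outer parentheses word-split that output into array elements. A standalone sketch of the same idiom with num_base_bdevs=3, as in this run; the printf/echo lines are illustrative additions:

    num_base_bdevs=3
    # Word-split the loop's stdout into an array, as at bdev_raid.sh@206.
    base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done))
    printf '%s\n' "${base_bdevs[@]}"   # three separate words, one per line
    echo "${base_bdevs[*]}"            # one space-joined word: BaseBdev1 BaseBdev2 BaseBdev3,
                                       # the form later quoted into "bdev_raid_create ... -b '...'"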
00:12:45.937 20:39:29 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:12:45.937 20:39:29 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:12:45.937 20:39:29 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:12:45.937 20:39:29 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:12:45.937 20:39:29 -- bdev/bdev_raid.sh@226 -- # raid_pid=48986 00:12:45.937 20:39:29 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 48986' 00:12:45.937 20:39:29 -- bdev/bdev_raid.sh@228 -- # waitforlisten 48986 /var/tmp/spdk-raid.sock 00:12:45.937 20:39:29 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:45.937 20:39:29 -- common/autotest_common.sh@819 -- # '[' -z 48986 ']' 00:12:45.937 20:39:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:45.937 20:39:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:45.937 20:39:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:45.937 20:39:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:45.937 20:39:29 -- common/autotest_common.sh@10 -- # set +x 00:12:45.937 [2024-04-15 20:39:29.373546] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:12:45.937 [2024-04-15 20:39:29.373891] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.196 [2024-04-15 20:39:29.530237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.454 [2024-04-15 20:39:29.722729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.454 [2024-04-15 20:39:29.918692] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:47.394 20:39:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:47.394 20:39:30 -- common/autotest_common.sh@852 -- # return 0 00:12:47.394 20:39:30 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:47.657 [2024-04-15 20:39:30.901483] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:47.657 [2024-04-15 20:39:30.901554] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:47.657 [2024-04-15 20:39:30.901566] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:47.657 [2024-04-15 20:39:30.901583] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:47.657 [2024-04-15 20:39:30.901590] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:47.657 [2024-04-15 20:39:30.901627] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:47.657 20:39:30 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:47.657 20:39:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:47.657 20:39:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:47.657 20:39:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:47.657 20:39:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:47.657 20:39:30 
-- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:47.657 20:39:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:47.657 20:39:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:47.657 20:39:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:47.657 20:39:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:47.657 20:39:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.657 20:39:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:47.657 20:39:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:47.657 "name": "Existed_Raid", 00:12:47.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.657 "strip_size_kb": 64, 00:12:47.657 "state": "configuring", 00:12:47.657 "raid_level": "raid0", 00:12:47.657 "superblock": false, 00:12:47.657 "num_base_bdevs": 3, 00:12:47.657 "num_base_bdevs_discovered": 0, 00:12:47.657 "num_base_bdevs_operational": 3, 00:12:47.657 "base_bdevs_list": [ 00:12:47.657 { 00:12:47.657 "name": "BaseBdev1", 00:12:47.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.657 "is_configured": false, 00:12:47.657 "data_offset": 0, 00:12:47.657 "data_size": 0 00:12:47.657 }, 00:12:47.657 { 00:12:47.657 "name": "BaseBdev2", 00:12:47.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.657 "is_configured": false, 00:12:47.657 "data_offset": 0, 00:12:47.657 "data_size": 0 00:12:47.657 }, 00:12:47.657 { 00:12:47.657 "name": "BaseBdev3", 00:12:47.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.657 "is_configured": false, 00:12:47.657 "data_offset": 0, 00:12:47.658 "data_size": 0 00:12:47.658 } 00:12:47.658 ] 00:12:47.658 }' 00:12:47.658 20:39:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:47.658 20:39:31 -- common/autotest_common.sh@10 -- # set +x 00:12:48.242 20:39:31 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:48.513 [2024-04-15 20:39:31.768147] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:48.513 [2024-04-15 20:39:31.768188] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:12:48.513 20:39:31 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:48.513 [2024-04-15 20:39:31.935919] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:48.513 [2024-04-15 20:39:31.935992] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:48.513 [2024-04-15 20:39:31.936003] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:48.513 [2024-04-15 20:39:31.936019] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:48.513 [2024-04-15 20:39:31.936026] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:48.513 [2024-04-15 20:39:31.936053] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:48.513 20:39:31 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:48.786 [2024-04-15 20:39:32.129114] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:48.786 BaseBdev1 00:12:48.786 20:39:32 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:12:48.786 20:39:32 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:12:48.786 20:39:32 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:48.786 20:39:32 -- common/autotest_common.sh@889 -- # local i 00:12:48.786 20:39:32 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:48.786 20:39:32 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:48.786 20:39:32 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:49.060 20:39:32 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:49.060 [ 00:12:49.060 { 00:12:49.060 "name": "BaseBdev1", 00:12:49.060 "aliases": [ 00:12:49.060 "61dcc506-6752-4923-972c-eefdbaa59b35" 00:12:49.060 ], 00:12:49.060 "product_name": "Malloc disk", 00:12:49.060 "block_size": 512, 00:12:49.060 "num_blocks": 65536, 00:12:49.060 "uuid": "61dcc506-6752-4923-972c-eefdbaa59b35", 00:12:49.060 "assigned_rate_limits": { 00:12:49.060 "rw_ios_per_sec": 0, 00:12:49.060 "rw_mbytes_per_sec": 0, 00:12:49.060 "r_mbytes_per_sec": 0, 00:12:49.060 "w_mbytes_per_sec": 0 00:12:49.060 }, 00:12:49.060 "claimed": true, 00:12:49.060 "claim_type": "exclusive_write", 00:12:49.060 "zoned": false, 00:12:49.060 "supported_io_types": { 00:12:49.060 "read": true, 00:12:49.060 "write": true, 00:12:49.060 "unmap": true, 00:12:49.060 "write_zeroes": true, 00:12:49.060 "flush": true, 00:12:49.060 "reset": true, 00:12:49.060 "compare": false, 00:12:49.060 "compare_and_write": false, 00:12:49.060 "abort": true, 00:12:49.060 "nvme_admin": false, 00:12:49.060 "nvme_io": false 00:12:49.060 }, 00:12:49.060 "memory_domains": [ 00:12:49.060 { 00:12:49.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.060 "dma_device_type": 2 00:12:49.060 } 00:12:49.060 ], 00:12:49.060 "driver_specific": {} 00:12:49.060 } 00:12:49.060 ] 00:12:49.060 20:39:32 -- common/autotest_common.sh@895 -- # return 0 00:12:49.060 20:39:32 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:49.060 20:39:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:49.060 20:39:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:49.060 20:39:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:49.060 20:39:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:49.060 20:39:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:49.060 20:39:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:49.060 20:39:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:49.060 20:39:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:49.060 20:39:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:49.060 20:39:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.060 20:39:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:49.323 20:39:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:49.323 "name": "Existed_Raid", 00:12:49.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.323 "strip_size_kb": 64, 00:12:49.323 "state": "configuring", 00:12:49.323 "raid_level": "raid0", 00:12:49.323 
"superblock": false, 00:12:49.323 "num_base_bdevs": 3, 00:12:49.323 "num_base_bdevs_discovered": 1, 00:12:49.323 "num_base_bdevs_operational": 3, 00:12:49.323 "base_bdevs_list": [ 00:12:49.323 { 00:12:49.323 "name": "BaseBdev1", 00:12:49.323 "uuid": "61dcc506-6752-4923-972c-eefdbaa59b35", 00:12:49.323 "is_configured": true, 00:12:49.323 "data_offset": 0, 00:12:49.323 "data_size": 65536 00:12:49.323 }, 00:12:49.323 { 00:12:49.323 "name": "BaseBdev2", 00:12:49.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.323 "is_configured": false, 00:12:49.323 "data_offset": 0, 00:12:49.323 "data_size": 0 00:12:49.323 }, 00:12:49.323 { 00:12:49.323 "name": "BaseBdev3", 00:12:49.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.323 "is_configured": false, 00:12:49.323 "data_offset": 0, 00:12:49.323 "data_size": 0 00:12:49.323 } 00:12:49.323 ] 00:12:49.323 }' 00:12:49.323 20:39:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:49.323 20:39:32 -- common/autotest_common.sh@10 -- # set +x 00:12:49.889 20:39:33 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:50.147 [2024-04-15 20:39:33.455208] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:50.147 [2024-04-15 20:39:33.455263] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027380 name Existed_Raid, state configuring 00:12:50.147 20:39:33 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:12:50.147 20:39:33 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:50.147 [2024-04-15 20:39:33.607024] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:50.147 [2024-04-15 20:39:33.608258] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:50.147 [2024-04-15 20:39:33.608316] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:50.147 [2024-04-15 20:39:33.608326] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:50.147 [2024-04-15 20:39:33.608353] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:50.147 20:39:33 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:12:50.147 20:39:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:50.147 20:39:33 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:50.147 20:39:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:50.147 20:39:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:50.147 20:39:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:50.147 20:39:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:50.147 20:39:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:50.147 20:39:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:50.147 20:39:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:50.147 20:39:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:50.147 20:39:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:50.147 20:39:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:50.147 20:39:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:12:50.406 20:39:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:50.406 "name": "Existed_Raid", 00:12:50.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.406 "strip_size_kb": 64, 00:12:50.406 "state": "configuring", 00:12:50.406 "raid_level": "raid0", 00:12:50.406 "superblock": false, 00:12:50.406 "num_base_bdevs": 3, 00:12:50.406 "num_base_bdevs_discovered": 1, 00:12:50.406 "num_base_bdevs_operational": 3, 00:12:50.406 "base_bdevs_list": [ 00:12:50.406 { 00:12:50.406 "name": "BaseBdev1", 00:12:50.406 "uuid": "61dcc506-6752-4923-972c-eefdbaa59b35", 00:12:50.406 "is_configured": true, 00:12:50.406 "data_offset": 0, 00:12:50.406 "data_size": 65536 00:12:50.406 }, 00:12:50.406 { 00:12:50.406 "name": "BaseBdev2", 00:12:50.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.406 "is_configured": false, 00:12:50.406 "data_offset": 0, 00:12:50.406 "data_size": 0 00:12:50.406 }, 00:12:50.406 { 00:12:50.406 "name": "BaseBdev3", 00:12:50.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.406 "is_configured": false, 00:12:50.406 "data_offset": 0, 00:12:50.406 "data_size": 0 00:12:50.406 } 00:12:50.406 ] 00:12:50.406 }' 00:12:50.406 20:39:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:50.406 20:39:33 -- common/autotest_common.sh@10 -- # set +x 00:12:50.973 20:39:34 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:51.231 [2024-04-15 20:39:34.606388] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:51.231 BaseBdev2 00:12:51.231 20:39:34 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:12:51.231 20:39:34 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:12:51.231 20:39:34 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:51.231 20:39:34 -- common/autotest_common.sh@889 -- # local i 00:12:51.231 20:39:34 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:51.231 20:39:34 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:51.231 20:39:34 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:51.489 20:39:34 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:51.489 [ 00:12:51.489 { 00:12:51.490 "name": "BaseBdev2", 00:12:51.490 "aliases": [ 00:12:51.490 "185b30af-0357-44e5-a152-4ce3c75cca52" 00:12:51.490 ], 00:12:51.490 "product_name": "Malloc disk", 00:12:51.490 "block_size": 512, 00:12:51.490 "num_blocks": 65536, 00:12:51.490 "uuid": "185b30af-0357-44e5-a152-4ce3c75cca52", 00:12:51.490 "assigned_rate_limits": { 00:12:51.490 "rw_ios_per_sec": 0, 00:12:51.490 "rw_mbytes_per_sec": 0, 00:12:51.490 "r_mbytes_per_sec": 0, 00:12:51.490 "w_mbytes_per_sec": 0 00:12:51.490 }, 00:12:51.490 "claimed": true, 00:12:51.490 "claim_type": "exclusive_write", 00:12:51.490 "zoned": false, 00:12:51.490 "supported_io_types": { 00:12:51.490 "read": true, 00:12:51.490 "write": true, 00:12:51.490 "unmap": true, 00:12:51.490 "write_zeroes": true, 00:12:51.490 "flush": true, 00:12:51.490 "reset": true, 00:12:51.490 "compare": false, 00:12:51.490 "compare_and_write": false, 00:12:51.490 "abort": true, 00:12:51.490 "nvme_admin": false, 00:12:51.490 "nvme_io": false 00:12:51.490 }, 00:12:51.490 "memory_domains": [ 00:12:51.490 { 00:12:51.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.490 
"dma_device_type": 2 00:12:51.490 } 00:12:51.490 ], 00:12:51.490 "driver_specific": {} 00:12:51.490 } 00:12:51.490 ] 00:12:51.490 20:39:34 -- common/autotest_common.sh@895 -- # return 0 00:12:51.490 20:39:34 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:12:51.490 20:39:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:51.490 20:39:34 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:51.490 20:39:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:51.490 20:39:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:51.490 20:39:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:51.490 20:39:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:51.490 20:39:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:51.490 20:39:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:51.490 20:39:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:51.490 20:39:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:51.490 20:39:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:51.490 20:39:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.490 20:39:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:51.748 20:39:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:51.748 "name": "Existed_Raid", 00:12:51.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.748 "strip_size_kb": 64, 00:12:51.748 "state": "configuring", 00:12:51.748 "raid_level": "raid0", 00:12:51.748 "superblock": false, 00:12:51.748 "num_base_bdevs": 3, 00:12:51.748 "num_base_bdevs_discovered": 2, 00:12:51.748 "num_base_bdevs_operational": 3, 00:12:51.748 "base_bdevs_list": [ 00:12:51.748 { 00:12:51.748 "name": "BaseBdev1", 00:12:51.748 "uuid": "61dcc506-6752-4923-972c-eefdbaa59b35", 00:12:51.748 "is_configured": true, 00:12:51.748 "data_offset": 0, 00:12:51.748 "data_size": 65536 00:12:51.748 }, 00:12:51.748 { 00:12:51.748 "name": "BaseBdev2", 00:12:51.748 "uuid": "185b30af-0357-44e5-a152-4ce3c75cca52", 00:12:51.748 "is_configured": true, 00:12:51.748 "data_offset": 0, 00:12:51.748 "data_size": 65536 00:12:51.748 }, 00:12:51.748 { 00:12:51.748 "name": "BaseBdev3", 00:12:51.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.748 "is_configured": false, 00:12:51.748 "data_offset": 0, 00:12:51.748 "data_size": 0 00:12:51.748 } 00:12:51.748 ] 00:12:51.748 }' 00:12:51.748 20:39:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:51.748 20:39:35 -- common/autotest_common.sh@10 -- # set +x 00:12:52.367 20:39:35 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:52.367 [2024-04-15 20:39:35.863999] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:52.367 [2024-04-15 20:39:35.864039] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000028580 00:12:52.367 [2024-04-15 20:39:35.864047] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:52.367 [2024-04-15 20:39:35.864136] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:12:52.367 [2024-04-15 20:39:35.864312] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000028580 00:12:52.367 [2024-04-15 20:39:35.864322] 
bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000028580 00:12:52.367 [2024-04-15 20:39:35.864486] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:52.625 BaseBdev3 00:12:52.625 20:39:35 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:12:52.625 20:39:35 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:12:52.625 20:39:35 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:52.625 20:39:35 -- common/autotest_common.sh@889 -- # local i 00:12:52.625 20:39:35 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:52.625 20:39:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:52.625 20:39:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:52.625 20:39:36 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:52.884 [ 00:12:52.884 { 00:12:52.885 "name": "BaseBdev3", 00:12:52.885 "aliases": [ 00:12:52.885 "cf81b170-42b9-4084-afcd-f8067aad93d1" 00:12:52.885 ], 00:12:52.885 "product_name": "Malloc disk", 00:12:52.885 "block_size": 512, 00:12:52.885 "num_blocks": 65536, 00:12:52.885 "uuid": "cf81b170-42b9-4084-afcd-f8067aad93d1", 00:12:52.885 "assigned_rate_limits": { 00:12:52.885 "rw_ios_per_sec": 0, 00:12:52.885 "rw_mbytes_per_sec": 0, 00:12:52.885 "r_mbytes_per_sec": 0, 00:12:52.885 "w_mbytes_per_sec": 0 00:12:52.885 }, 00:12:52.885 "claimed": true, 00:12:52.885 "claim_type": "exclusive_write", 00:12:52.885 "zoned": false, 00:12:52.885 "supported_io_types": { 00:12:52.885 "read": true, 00:12:52.885 "write": true, 00:12:52.885 "unmap": true, 00:12:52.885 "write_zeroes": true, 00:12:52.885 "flush": true, 00:12:52.885 "reset": true, 00:12:52.885 "compare": false, 00:12:52.885 "compare_and_write": false, 00:12:52.885 "abort": true, 00:12:52.885 "nvme_admin": false, 00:12:52.885 "nvme_io": false 00:12:52.885 }, 00:12:52.885 "memory_domains": [ 00:12:52.885 { 00:12:52.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.885 "dma_device_type": 2 00:12:52.885 } 00:12:52.885 ], 00:12:52.885 "driver_specific": {} 00:12:52.885 } 00:12:52.885 ] 00:12:52.885 20:39:36 -- common/autotest_common.sh@895 -- # return 0 00:12:52.885 20:39:36 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:12:52.885 20:39:36 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:52.885 20:39:36 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:12:52.885 20:39:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:52.885 20:39:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:52.885 20:39:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:52.885 20:39:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:52.885 20:39:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:52.885 20:39:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:52.885 20:39:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:52.885 20:39:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:52.885 20:39:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:52.885 20:39:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:52.885 20:39:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.885 
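The @127 capture just above pipes bdev_raid_get_bdevs all through jq to store the Existed_Raid object in raid_bdev_info; xtrace echoes the captured JSON in the lines that follow. Individual fields can then be read back out with jq, as in this illustrative sketch — the expected values are taken from the dump below, and the suite's actual assertions on them are not visible in the log:

    jq -r '.state'                     <<< "$raid_bdev_info"   # online
    jq -r '.num_base_bdevs_discovered' <<< "$raid_bdev_info"   # 3
    jq -r '.base_bdevs_list[].name'    <<< "$raid_bdev_info"   # BaseBdev1, BaseBdev2,
                                                               # BaseBdev3 (one per line)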
20:39:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:52.885 "name": "Existed_Raid", 00:12:52.885 "uuid": "5ae1f762-65c7-42ac-bef9-6d6f6b3be00e", 00:12:52.885 "strip_size_kb": 64, 00:12:52.885 "state": "online", 00:12:52.885 "raid_level": "raid0", 00:12:52.885 "superblock": false, 00:12:52.885 "num_base_bdevs": 3, 00:12:52.885 "num_base_bdevs_discovered": 3, 00:12:52.885 "num_base_bdevs_operational": 3, 00:12:52.885 "base_bdevs_list": [ 00:12:52.885 { 00:12:52.885 "name": "BaseBdev1", 00:12:52.885 "uuid": "61dcc506-6752-4923-972c-eefdbaa59b35", 00:12:52.885 "is_configured": true, 00:12:52.885 "data_offset": 0, 00:12:52.885 "data_size": 65536 00:12:52.885 }, 00:12:52.885 { 00:12:52.885 "name": "BaseBdev2", 00:12:52.885 "uuid": "185b30af-0357-44e5-a152-4ce3c75cca52", 00:12:52.885 "is_configured": true, 00:12:52.885 "data_offset": 0, 00:12:52.885 "data_size": 65536 00:12:52.885 }, 00:12:52.885 { 00:12:52.885 "name": "BaseBdev3", 00:12:52.885 "uuid": "cf81b170-42b9-4084-afcd-f8067aad93d1", 00:12:52.885 "is_configured": true, 00:12:52.885 "data_offset": 0, 00:12:52.885 "data_size": 65536 00:12:52.885 } 00:12:52.885 ] 00:12:52.885 }' 00:12:52.885 20:39:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:52.885 20:39:36 -- common/autotest_common.sh@10 -- # set +x 00:12:53.453 20:39:36 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:53.712 [2024-04-15 20:39:37.118237] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:53.712 [2024-04-15 20:39:37.118275] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:53.712 [2024-04-15 20:39:37.118320] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:53.972 20:39:37 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:12:53.972 20:39:37 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:12:53.972 20:39:37 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:12:53.972 20:39:37 -- bdev/bdev_raid.sh@197 -- # return 1 00:12:53.972 20:39:37 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:12:53.972 20:39:37 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:12:53.972 20:39:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:53.972 20:39:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:12:53.972 20:39:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:53.972 20:39:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:53.972 20:39:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:12:53.972 20:39:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:53.972 20:39:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:53.972 20:39:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:53.972 20:39:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:53.972 20:39:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:53.972 20:39:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.972 20:39:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:53.972 "name": "Existed_Raid", 00:12:53.972 "uuid": "5ae1f762-65c7-42ac-bef9-6d6f6b3be00e", 00:12:53.972 "strip_size_kb": 64, 00:12:53.972 "state": "offline", 00:12:53.972 "raid_level": "raid0", 00:12:53.972 "superblock": false, 00:12:53.972 "num_base_bdevs": 3, 00:12:53.972 
"num_base_bdevs_discovered": 2, 00:12:53.972 "num_base_bdevs_operational": 2, 00:12:53.972 "base_bdevs_list": [ 00:12:53.972 { 00:12:53.972 "name": null, 00:12:53.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.972 "is_configured": false, 00:12:53.972 "data_offset": 0, 00:12:53.972 "data_size": 65536 00:12:53.972 }, 00:12:53.972 { 00:12:53.972 "name": "BaseBdev2", 00:12:53.972 "uuid": "185b30af-0357-44e5-a152-4ce3c75cca52", 00:12:53.972 "is_configured": true, 00:12:53.972 "data_offset": 0, 00:12:53.972 "data_size": 65536 00:12:53.972 }, 00:12:53.972 { 00:12:53.972 "name": "BaseBdev3", 00:12:53.972 "uuid": "cf81b170-42b9-4084-afcd-f8067aad93d1", 00:12:53.972 "is_configured": true, 00:12:53.972 "data_offset": 0, 00:12:53.972 "data_size": 65536 00:12:53.972 } 00:12:53.972 ] 00:12:53.972 }' 00:12:53.972 20:39:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:53.972 20:39:37 -- common/autotest_common.sh@10 -- # set +x 00:12:54.539 20:39:37 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:12:54.539 20:39:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:12:54.539 20:39:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:12:54.539 20:39:37 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:54.798 20:39:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:12:54.798 20:39:38 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:54.798 20:39:38 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:55.057 [2024-04-15 20:39:38.367091] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:55.057 20:39:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:12:55.057 20:39:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:12:55.057 20:39:38 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:55.057 20:39:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:12:55.315 20:39:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:12:55.315 20:39:38 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:55.315 20:39:38 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:12:55.574 [2024-04-15 20:39:38.849859] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:55.574 [2024-04-15 20:39:38.849928] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000028580 name Existed_Raid, state offline 00:12:55.574 20:39:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:12:55.574 20:39:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:12:55.574 20:39:38 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:55.574 20:39:38 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:12:55.833 20:39:39 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:12:55.833 20:39:39 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:12:55.833 20:39:39 -- bdev/bdev_raid.sh@287 -- # killprocess 48986 00:12:55.833 20:39:39 -- common/autotest_common.sh@926 -- # '[' -z 48986 ']' 00:12:55.833 20:39:39 -- common/autotest_common.sh@930 -- # kill -0 48986 00:12:55.833 20:39:39 -- common/autotest_common.sh@931 -- # uname 00:12:55.833 20:39:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:55.833 
20:39:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 48986 00:12:55.833 killing process with pid 48986 00:12:55.833 20:39:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:55.833 20:39:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:55.833 20:39:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 48986' 00:12:55.833 20:39:39 -- common/autotest_common.sh@945 -- # kill 48986 00:12:55.833 20:39:39 -- common/autotest_common.sh@950 -- # wait 48986 00:12:55.833 [2024-04-15 20:39:39.173604] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:55.833 [2024-04-15 20:39:39.173718] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:57.209 20:39:40 -- bdev/bdev_raid.sh@289 -- # return 0 00:12:57.209 00:12:57.209 real 0m11.321s 00:12:57.209 user 0m19.008s 00:12:57.209 sys 0m1.355s 00:12:57.209 20:39:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:57.209 20:39:40 -- common/autotest_common.sh@10 -- # set +x 00:12:57.209 ************************************ 00:12:57.209 END TEST raid_state_function_test 00:12:57.209 ************************************ 00:12:57.209 20:39:40 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:12:57.209 20:39:40 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:12:57.209 20:39:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:57.209 20:39:40 -- common/autotest_common.sh@10 -- # set +x 00:12:57.209 ************************************ 00:12:57.209 START TEST raid_state_function_test_sb 00:12:57.209 ************************************ 00:12:57.209 20:39:40 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 true 00:12:57.209 20:39:40 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:12:57.209 20:39:40 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:12:57.209 20:39:40 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:12:57.209 20:39:40 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:12:57.209 20:39:40 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:12:57.209 20:39:40 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:12:57.209 20:39:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:57.209 20:39:40 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:12:57.209 20:39:40 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:57.209 20:39:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:57.209 20:39:40 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:12:57.209 20:39:40 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:57.209 20:39:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:57.209 20:39:40 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:12:57.209 20:39:40 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:57.209 20:39:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:57.209 20:39:40 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:12:57.209 20:39:40 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:12:57.209 20:39:40 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:12:57.209 20:39:40 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:12:57.209 20:39:40 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:12:57.209 20:39:40 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:12:57.209 20:39:40 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:12:57.209 20:39:40 -- bdev/bdev_raid.sh@214 -- # 
strip_size_create_arg='-z 64' 00:12:57.209 20:39:40 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:12:57.209 20:39:40 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:12:57.209 Process raid pid: 49366 00:12:57.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:57.209 20:39:40 -- bdev/bdev_raid.sh@226 -- # raid_pid=49366 00:12:57.209 20:39:40 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 49366' 00:12:57.209 20:39:40 -- bdev/bdev_raid.sh@228 -- # waitforlisten 49366 /var/tmp/spdk-raid.sock 00:12:57.209 20:39:40 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:57.209 20:39:40 -- common/autotest_common.sh@819 -- # '[' -z 49366 ']' 00:12:57.209 20:39:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:57.209 20:39:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:57.209 20:39:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:57.209 20:39:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:57.209 20:39:40 -- common/autotest_common.sh@10 -- # set +x 00:12:57.468 [2024-04-15 20:39:40.759466] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:12:57.468 [2024-04-15 20:39:40.759837] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.468 [2024-04-15 20:39:40.923479] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.726 [2024-04-15 20:39:41.131420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.985 [2024-04-15 20:39:41.341053] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:58.245 20:39:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:58.245 20:39:41 -- common/autotest_common.sh@852 -- # return 0 00:12:58.245 20:39:41 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:58.245 [2024-04-15 20:39:41.682530] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:58.245 [2024-04-15 20:39:41.682593] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:58.245 [2024-04-15 20:39:41.682604] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:58.245 [2024-04-15 20:39:41.682622] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:58.245 [2024-04-15 20:39:41.682629] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:58.245 [2024-04-15 20:39:41.682850] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:58.245 20:39:41 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:58.245 20:39:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:58.245 20:39:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:58.245 20:39:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:58.245 20:39:41 -- bdev/bdev_raid.sh@120 -- # local 
strip_size=64 00:12:58.245 20:39:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:58.245 20:39:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:58.245 20:39:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:58.245 20:39:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:58.245 20:39:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:58.245 20:39:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:58.245 20:39:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.503 20:39:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:58.503 "name": "Existed_Raid", 00:12:58.503 "uuid": "7767a1d8-18d9-45bf-87c1-06c448ea0a1d", 00:12:58.503 "strip_size_kb": 64, 00:12:58.503 "state": "configuring", 00:12:58.503 "raid_level": "raid0", 00:12:58.503 "superblock": true, 00:12:58.503 "num_base_bdevs": 3, 00:12:58.503 "num_base_bdevs_discovered": 0, 00:12:58.503 "num_base_bdevs_operational": 3, 00:12:58.503 "base_bdevs_list": [ 00:12:58.503 { 00:12:58.503 "name": "BaseBdev1", 00:12:58.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.503 "is_configured": false, 00:12:58.503 "data_offset": 0, 00:12:58.503 "data_size": 0 00:12:58.503 }, 00:12:58.503 { 00:12:58.503 "name": "BaseBdev2", 00:12:58.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.503 "is_configured": false, 00:12:58.503 "data_offset": 0, 00:12:58.503 "data_size": 0 00:12:58.503 }, 00:12:58.503 { 00:12:58.503 "name": "BaseBdev3", 00:12:58.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.503 "is_configured": false, 00:12:58.503 "data_offset": 0, 00:12:58.503 "data_size": 0 00:12:58.503 } 00:12:58.503 ] 00:12:58.503 }' 00:12:58.503 20:39:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:58.503 20:39:41 -- common/autotest_common.sh@10 -- # set +x 00:12:59.082 20:39:42 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:59.082 [2024-04-15 20:39:42.545705] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:59.082 [2024-04-15 20:39:42.545754] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:12:59.082 20:39:42 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:59.342 [2024-04-15 20:39:42.705505] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:59.342 [2024-04-15 20:39:42.705576] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:59.342 [2024-04-15 20:39:42.705587] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:59.342 [2024-04-15 20:39:42.705604] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:59.342 [2024-04-15 20:39:42.705612] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:59.342 [2024-04-15 20:39:42.705860] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:59.342 20:39:42 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:59.601 BaseBdev1 00:12:59.601 
[2024-04-15 20:39:42.920885] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:59.601 20:39:42 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:12:59.601 20:39:42 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:12:59.601 20:39:42 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:59.601 20:39:42 -- common/autotest_common.sh@889 -- # local i 00:12:59.601 20:39:42 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:59.601 20:39:42 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:59.601 20:39:42 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:59.601 20:39:43 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:59.859 [ 00:12:59.859 { 00:12:59.859 "name": "BaseBdev1", 00:12:59.859 "aliases": [ 00:12:59.859 "845b6a7c-cda2-4bf4-a745-dd2c79465a05" 00:12:59.859 ], 00:12:59.859 "product_name": "Malloc disk", 00:12:59.859 "block_size": 512, 00:12:59.859 "num_blocks": 65536, 00:12:59.859 "uuid": "845b6a7c-cda2-4bf4-a745-dd2c79465a05", 00:12:59.859 "assigned_rate_limits": { 00:12:59.859 "rw_ios_per_sec": 0, 00:12:59.859 "rw_mbytes_per_sec": 0, 00:12:59.859 "r_mbytes_per_sec": 0, 00:12:59.859 "w_mbytes_per_sec": 0 00:12:59.859 }, 00:12:59.859 "claimed": true, 00:12:59.859 "claim_type": "exclusive_write", 00:12:59.859 "zoned": false, 00:12:59.859 "supported_io_types": { 00:12:59.859 "read": true, 00:12:59.859 "write": true, 00:12:59.859 "unmap": true, 00:12:59.859 "write_zeroes": true, 00:12:59.859 "flush": true, 00:12:59.859 "reset": true, 00:12:59.859 "compare": false, 00:12:59.859 "compare_and_write": false, 00:12:59.859 "abort": true, 00:12:59.859 "nvme_admin": false, 00:12:59.859 "nvme_io": false 00:12:59.859 }, 00:12:59.859 "memory_domains": [ 00:12:59.859 { 00:12:59.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.859 "dma_device_type": 2 00:12:59.859 } 00:12:59.859 ], 00:12:59.859 "driver_specific": {} 00:12:59.859 } 00:12:59.859 ] 00:12:59.859 20:39:43 -- common/autotest_common.sh@895 -- # return 0 00:12:59.859 20:39:43 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:59.859 20:39:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:59.859 20:39:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:59.859 20:39:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:59.859 20:39:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:59.859 20:39:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:59.859 20:39:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:59.859 20:39:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:59.859 20:39:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:59.859 20:39:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:59.859 20:39:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:59.859 20:39:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.117 20:39:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:00.117 "name": "Existed_Raid", 00:13:00.117 "uuid": "91a28748-59fd-45a5-9a6c-897e2c9d28ac", 00:13:00.117 "strip_size_kb": 64, 00:13:00.117 "state": "configuring", 00:13:00.117 "raid_level": "raid0", 00:13:00.117 
"superblock": true, 00:13:00.117 "num_base_bdevs": 3, 00:13:00.117 "num_base_bdevs_discovered": 1, 00:13:00.117 "num_base_bdevs_operational": 3, 00:13:00.117 "base_bdevs_list": [ 00:13:00.117 { 00:13:00.117 "name": "BaseBdev1", 00:13:00.117 "uuid": "845b6a7c-cda2-4bf4-a745-dd2c79465a05", 00:13:00.117 "is_configured": true, 00:13:00.117 "data_offset": 2048, 00:13:00.117 "data_size": 63488 00:13:00.117 }, 00:13:00.117 { 00:13:00.117 "name": "BaseBdev2", 00:13:00.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.117 "is_configured": false, 00:13:00.117 "data_offset": 0, 00:13:00.117 "data_size": 0 00:13:00.117 }, 00:13:00.117 { 00:13:00.117 "name": "BaseBdev3", 00:13:00.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.117 "is_configured": false, 00:13:00.117 "data_offset": 0, 00:13:00.117 "data_size": 0 00:13:00.117 } 00:13:00.117 ] 00:13:00.117 }' 00:13:00.117 20:39:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:00.117 20:39:43 -- common/autotest_common.sh@10 -- # set +x 00:13:00.682 20:39:44 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:00.682 [2024-04-15 20:39:44.155124] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:00.682 [2024-04-15 20:39:44.155186] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027380 name Existed_Raid, state configuring 00:13:00.682 20:39:44 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:13:00.682 20:39:44 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:01.247 20:39:44 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:01.247 BaseBdev1 00:13:01.247 20:39:44 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:13:01.247 20:39:44 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:13:01.247 20:39:44 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:01.247 20:39:44 -- common/autotest_common.sh@889 -- # local i 00:13:01.247 20:39:44 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:01.247 20:39:44 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:01.247 20:39:44 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:01.504 20:39:44 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:01.762 [ 00:13:01.762 { 00:13:01.762 "name": "BaseBdev1", 00:13:01.762 "aliases": [ 00:13:01.762 "2aba43bc-1f1f-43f8-888d-23d1eed14c55" 00:13:01.762 ], 00:13:01.762 "product_name": "Malloc disk", 00:13:01.762 "block_size": 512, 00:13:01.762 "num_blocks": 65536, 00:13:01.762 "uuid": "2aba43bc-1f1f-43f8-888d-23d1eed14c55", 00:13:01.762 "assigned_rate_limits": { 00:13:01.762 "rw_ios_per_sec": 0, 00:13:01.762 "rw_mbytes_per_sec": 0, 00:13:01.762 "r_mbytes_per_sec": 0, 00:13:01.762 "w_mbytes_per_sec": 0 00:13:01.762 }, 00:13:01.762 "claimed": false, 00:13:01.762 "zoned": false, 00:13:01.762 "supported_io_types": { 00:13:01.762 "read": true, 00:13:01.762 "write": true, 00:13:01.762 "unmap": true, 00:13:01.762 "write_zeroes": true, 00:13:01.762 "flush": true, 00:13:01.762 "reset": true, 00:13:01.762 "compare": false, 00:13:01.762 "compare_and_write": false, 00:13:01.762 "abort": true, 00:13:01.762 "nvme_admin": 
false, 00:13:01.762 "nvme_io": false 00:13:01.762 }, 00:13:01.762 "memory_domains": [ 00:13:01.762 { 00:13:01.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.762 "dma_device_type": 2 00:13:01.762 } 00:13:01.762 ], 00:13:01.762 "driver_specific": {} 00:13:01.762 } 00:13:01.762 ] 00:13:01.762 20:39:45 -- common/autotest_common.sh@895 -- # return 0 00:13:01.762 20:39:45 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:01.762 [2024-04-15 20:39:45.170855] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:01.762 [2024-04-15 20:39:45.172295] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:01.762 [2024-04-15 20:39:45.172348] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:01.762 [2024-04-15 20:39:45.172357] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:01.762 [2024-04-15 20:39:45.172379] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:01.762 20:39:45 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:13:01.762 20:39:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:01.762 20:39:45 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:01.762 20:39:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:01.762 20:39:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:01.762 20:39:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:01.762 20:39:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:01.762 20:39:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:01.762 20:39:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:01.762 20:39:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:01.762 20:39:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:01.762 20:39:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:01.762 20:39:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.762 20:39:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:02.019 20:39:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:02.019 "name": "Existed_Raid", 00:13:02.019 "uuid": "58c15d96-aba8-407c-92b2-b7fa3dd6a2c9", 00:13:02.019 "strip_size_kb": 64, 00:13:02.019 "state": "configuring", 00:13:02.019 "raid_level": "raid0", 00:13:02.019 "superblock": true, 00:13:02.019 "num_base_bdevs": 3, 00:13:02.019 "num_base_bdevs_discovered": 1, 00:13:02.019 "num_base_bdevs_operational": 3, 00:13:02.019 "base_bdevs_list": [ 00:13:02.019 { 00:13:02.019 "name": "BaseBdev1", 00:13:02.019 "uuid": "2aba43bc-1f1f-43f8-888d-23d1eed14c55", 00:13:02.019 "is_configured": true, 00:13:02.019 "data_offset": 2048, 00:13:02.019 "data_size": 63488 00:13:02.019 }, 00:13:02.019 { 00:13:02.019 "name": "BaseBdev2", 00:13:02.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.019 "is_configured": false, 00:13:02.019 "data_offset": 0, 00:13:02.019 "data_size": 0 00:13:02.019 }, 00:13:02.019 { 00:13:02.019 "name": "BaseBdev3", 00:13:02.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.019 "is_configured": false, 00:13:02.019 "data_offset": 0, 00:13:02.019 "data_size": 0 00:13:02.019 } 
00:13:02.019 ] 00:13:02.019 }' 00:13:02.020 20:39:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:02.020 20:39:45 -- common/autotest_common.sh@10 -- # set +x 00:13:02.661 20:39:45 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:02.661 [2024-04-15 20:39:46.108057] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:02.661 BaseBdev2 00:13:02.661 20:39:46 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:13:02.661 20:39:46 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:13:02.661 20:39:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:02.661 20:39:46 -- common/autotest_common.sh@889 -- # local i 00:13:02.661 20:39:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:02.661 20:39:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:02.661 20:39:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:02.919 20:39:46 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:03.177 [ 00:13:03.177 { 00:13:03.177 "name": "BaseBdev2", 00:13:03.177 "aliases": [ 00:13:03.177 "5de05637-9398-4186-a64a-4c542a56daa1" 00:13:03.177 ], 00:13:03.177 "product_name": "Malloc disk", 00:13:03.177 "block_size": 512, 00:13:03.177 "num_blocks": 65536, 00:13:03.177 "uuid": "5de05637-9398-4186-a64a-4c542a56daa1", 00:13:03.177 "assigned_rate_limits": { 00:13:03.177 "rw_ios_per_sec": 0, 00:13:03.177 "rw_mbytes_per_sec": 0, 00:13:03.177 "r_mbytes_per_sec": 0, 00:13:03.177 "w_mbytes_per_sec": 0 00:13:03.177 }, 00:13:03.177 "claimed": true, 00:13:03.177 "claim_type": "exclusive_write", 00:13:03.177 "zoned": false, 00:13:03.177 "supported_io_types": { 00:13:03.178 "read": true, 00:13:03.178 "write": true, 00:13:03.178 "unmap": true, 00:13:03.178 "write_zeroes": true, 00:13:03.178 "flush": true, 00:13:03.178 "reset": true, 00:13:03.178 "compare": false, 00:13:03.178 "compare_and_write": false, 00:13:03.178 "abort": true, 00:13:03.178 "nvme_admin": false, 00:13:03.178 "nvme_io": false 00:13:03.178 }, 00:13:03.178 "memory_domains": [ 00:13:03.178 { 00:13:03.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.178 "dma_device_type": 2 00:13:03.178 } 00:13:03.178 ], 00:13:03.178 "driver_specific": {} 00:13:03.178 } 00:13:03.178 ] 00:13:03.178 20:39:46 -- common/autotest_common.sh@895 -- # return 0 00:13:03.178 20:39:46 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:03.178 20:39:46 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:03.178 20:39:46 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:03.178 20:39:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:03.178 20:39:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:03.178 20:39:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:03.178 20:39:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:03.178 20:39:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:03.178 20:39:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:03.178 20:39:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:03.178 20:39:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:03.178 20:39:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:03.178 20:39:46 -- 
bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:03.178 20:39:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.437 20:39:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:03.438 "name": "Existed_Raid", 00:13:03.438 "uuid": "58c15d96-aba8-407c-92b2-b7fa3dd6a2c9", 00:13:03.438 "strip_size_kb": 64, 00:13:03.438 "state": "configuring", 00:13:03.438 "raid_level": "raid0", 00:13:03.438 "superblock": true, 00:13:03.438 "num_base_bdevs": 3, 00:13:03.438 "num_base_bdevs_discovered": 2, 00:13:03.438 "num_base_bdevs_operational": 3, 00:13:03.438 "base_bdevs_list": [ 00:13:03.438 { 00:13:03.438 "name": "BaseBdev1", 00:13:03.438 "uuid": "2aba43bc-1f1f-43f8-888d-23d1eed14c55", 00:13:03.438 "is_configured": true, 00:13:03.438 "data_offset": 2048, 00:13:03.438 "data_size": 63488 00:13:03.438 }, 00:13:03.438 { 00:13:03.438 "name": "BaseBdev2", 00:13:03.438 "uuid": "5de05637-9398-4186-a64a-4c542a56daa1", 00:13:03.438 "is_configured": true, 00:13:03.438 "data_offset": 2048, 00:13:03.438 "data_size": 63488 00:13:03.438 }, 00:13:03.438 { 00:13:03.438 "name": "BaseBdev3", 00:13:03.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.438 "is_configured": false, 00:13:03.438 "data_offset": 0, 00:13:03.438 "data_size": 0 00:13:03.438 } 00:13:03.438 ] 00:13:03.438 }' 00:13:03.438 20:39:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:03.438 20:39:46 -- common/autotest_common.sh@10 -- # set +x 00:13:04.006 20:39:47 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:04.264 BaseBdev3 00:13:04.264 [2024-04-15 20:39:47.537793] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:04.264 [2024-04-15 20:39:47.537949] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000028b80 00:13:04.264 [2024-04-15 20:39:47.537964] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:04.264 [2024-04-15 20:39:47.538045] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:13:04.264 [2024-04-15 20:39:47.538220] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000028b80 00:13:04.264 [2024-04-15 20:39:47.538230] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000028b80 00:13:04.264 [2024-04-15 20:39:47.538329] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.264 20:39:47 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:13:04.264 20:39:47 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:13:04.264 20:39:47 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:04.264 20:39:47 -- common/autotest_common.sh@889 -- # local i 00:13:04.264 20:39:47 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:04.264 20:39:47 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:04.264 20:39:47 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:04.526 20:39:47 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:04.526 [ 00:13:04.526 { 00:13:04.526 "name": "BaseBdev3", 00:13:04.526 "aliases": [ 00:13:04.526 "8681142a-a9d6-44fe-93e3-7022eb079184" 00:13:04.526 ], 
00:13:04.526 "product_name": "Malloc disk", 00:13:04.526 "block_size": 512, 00:13:04.526 "num_blocks": 65536, 00:13:04.526 "uuid": "8681142a-a9d6-44fe-93e3-7022eb079184", 00:13:04.526 "assigned_rate_limits": { 00:13:04.526 "rw_ios_per_sec": 0, 00:13:04.526 "rw_mbytes_per_sec": 0, 00:13:04.526 "r_mbytes_per_sec": 0, 00:13:04.526 "w_mbytes_per_sec": 0 00:13:04.526 }, 00:13:04.526 "claimed": true, 00:13:04.526 "claim_type": "exclusive_write", 00:13:04.526 "zoned": false, 00:13:04.526 "supported_io_types": { 00:13:04.526 "read": true, 00:13:04.526 "write": true, 00:13:04.526 "unmap": true, 00:13:04.526 "write_zeroes": true, 00:13:04.526 "flush": true, 00:13:04.526 "reset": true, 00:13:04.526 "compare": false, 00:13:04.526 "compare_and_write": false, 00:13:04.526 "abort": true, 00:13:04.526 "nvme_admin": false, 00:13:04.526 "nvme_io": false 00:13:04.526 }, 00:13:04.526 "memory_domains": [ 00:13:04.526 { 00:13:04.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.526 "dma_device_type": 2 00:13:04.526 } 00:13:04.526 ], 00:13:04.526 "driver_specific": {} 00:13:04.526 } 00:13:04.526 ] 00:13:04.526 20:39:47 -- common/autotest_common.sh@895 -- # return 0 00:13:04.526 20:39:47 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:04.526 20:39:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:04.526 20:39:47 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:13:04.526 20:39:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:04.526 20:39:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:04.526 20:39:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:04.526 20:39:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:04.526 20:39:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:04.526 20:39:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:04.526 20:39:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:04.526 20:39:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:04.526 20:39:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:04.526 20:39:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:04.526 20:39:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:04.790 20:39:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:04.790 "name": "Existed_Raid", 00:13:04.790 "uuid": "58c15d96-aba8-407c-92b2-b7fa3dd6a2c9", 00:13:04.790 "strip_size_kb": 64, 00:13:04.790 "state": "online", 00:13:04.790 "raid_level": "raid0", 00:13:04.790 "superblock": true, 00:13:04.790 "num_base_bdevs": 3, 00:13:04.790 "num_base_bdevs_discovered": 3, 00:13:04.790 "num_base_bdevs_operational": 3, 00:13:04.790 "base_bdevs_list": [ 00:13:04.790 { 00:13:04.790 "name": "BaseBdev1", 00:13:04.790 "uuid": "2aba43bc-1f1f-43f8-888d-23d1eed14c55", 00:13:04.790 "is_configured": true, 00:13:04.790 "data_offset": 2048, 00:13:04.790 "data_size": 63488 00:13:04.790 }, 00:13:04.790 { 00:13:04.790 "name": "BaseBdev2", 00:13:04.790 "uuid": "5de05637-9398-4186-a64a-4c542a56daa1", 00:13:04.790 "is_configured": true, 00:13:04.790 "data_offset": 2048, 00:13:04.790 "data_size": 63488 00:13:04.790 }, 00:13:04.790 { 00:13:04.790 "name": "BaseBdev3", 00:13:04.790 "uuid": "8681142a-a9d6-44fe-93e3-7022eb079184", 00:13:04.790 "is_configured": true, 00:13:04.790 "data_offset": 2048, 00:13:04.790 "data_size": 63488 00:13:04.790 } 00:13:04.790 ] 00:13:04.790 }' 00:13:04.790 20:39:48 -- 
bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:04.790 20:39:48 -- common/autotest_common.sh@10 -- # set +x 00:13:05.360 20:39:48 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:05.619 [2024-04-15 20:39:48.955884] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:05.620 [2024-04-15 20:39:48.955922] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:05.620 [2024-04-15 20:39:48.955961] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:05.620 20:39:49 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:13:05.620 20:39:49 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:13:05.620 20:39:49 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:13:05.620 20:39:49 -- bdev/bdev_raid.sh@197 -- # return 1 00:13:05.620 20:39:49 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:13:05.620 20:39:49 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:13:05.620 20:39:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:05.620 20:39:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:13:05.620 20:39:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:05.620 20:39:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:05.620 20:39:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:05.620 20:39:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:05.620 20:39:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:05.620 20:39:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:05.620 20:39:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:05.620 20:39:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:05.620 20:39:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:05.878 20:39:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:05.878 "name": "Existed_Raid", 00:13:05.878 "uuid": "58c15d96-aba8-407c-92b2-b7fa3dd6a2c9", 00:13:05.878 "strip_size_kb": 64, 00:13:05.878 "state": "offline", 00:13:05.878 "raid_level": "raid0", 00:13:05.878 "superblock": true, 00:13:05.878 "num_base_bdevs": 3, 00:13:05.878 "num_base_bdevs_discovered": 2, 00:13:05.878 "num_base_bdevs_operational": 2, 00:13:05.878 "base_bdevs_list": [ 00:13:05.878 { 00:13:05.878 "name": null, 00:13:05.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.878 "is_configured": false, 00:13:05.878 "data_offset": 2048, 00:13:05.878 "data_size": 63488 00:13:05.878 }, 00:13:05.878 { 00:13:05.878 "name": "BaseBdev2", 00:13:05.878 "uuid": "5de05637-9398-4186-a64a-4c542a56daa1", 00:13:05.878 "is_configured": true, 00:13:05.878 "data_offset": 2048, 00:13:05.878 "data_size": 63488 00:13:05.878 }, 00:13:05.878 { 00:13:05.878 "name": "BaseBdev3", 00:13:05.878 "uuid": "8681142a-a9d6-44fe-93e3-7022eb079184", 00:13:05.878 "is_configured": true, 00:13:05.878 "data_offset": 2048, 00:13:05.878 "data_size": 63488 00:13:05.878 } 00:13:05.878 ] 00:13:05.878 }' 00:13:05.878 20:39:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:05.878 20:39:49 -- common/autotest_common.sh@10 -- # set +x 00:13:06.445 20:39:49 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:13:06.445 20:39:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:06.445 20:39:49 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:06.445 20:39:49 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:13:06.702 20:39:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:13:06.702 20:39:49 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:06.702 20:39:49 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:06.960 [2024-04-15 20:39:50.235863] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:06.960 20:39:50 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:13:06.960 20:39:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:06.960 20:39:50 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:13:06.960 20:39:50 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:07.216 20:39:50 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:13:07.216 20:39:50 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:07.216 20:39:50 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:13:07.474 [2024-04-15 20:39:50.941853] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:07.474 [2024-04-15 20:39:50.941935] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000028b80 name Existed_Raid, state offline 00:13:07.733 20:39:51 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:13:07.733 20:39:51 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:07.733 20:39:51 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:07.733 20:39:51 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:13:07.733 20:39:51 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:13:07.733 20:39:51 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:13:07.733 20:39:51 -- bdev/bdev_raid.sh@287 -- # killprocess 49366 00:13:07.733 20:39:51 -- common/autotest_common.sh@926 -- # '[' -z 49366 ']' 00:13:07.733 20:39:51 -- common/autotest_common.sh@930 -- # kill -0 49366 00:13:07.733 20:39:51 -- common/autotest_common.sh@931 -- # uname 00:13:07.733 20:39:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:07.733 20:39:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 49366 00:13:07.992 killing process with pid 49366 00:13:07.992 20:39:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:07.992 20:39:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:07.992 20:39:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 49366' 00:13:07.992 20:39:51 -- common/autotest_common.sh@945 -- # kill 49366 00:13:07.992 20:39:51 -- common/autotest_common.sh@950 -- # wait 49366 00:13:07.992 [2024-04-15 20:39:51.240740] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:07.992 [2024-04-15 20:39:51.240848] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:09.371 ************************************ 00:13:09.371 END TEST raid_state_function_test_sb 00:13:09.371 ************************************ 00:13:09.371 20:39:52 -- bdev/bdev_raid.sh@289 -- # return 0 00:13:09.371 00:13:09.371 real 0m11.894s 00:13:09.371 user 0m20.389s 00:13:09.371 sys 0m1.507s 00:13:09.371 20:39:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:09.371 20:39:52 -- common/autotest_common.sh@10 -- # set +x 
00:13:09.371 20:39:52 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:13:09.371 20:39:52 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:13:09.371 20:39:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:09.371 20:39:52 -- common/autotest_common.sh@10 -- # set +x 00:13:09.371 ************************************ 00:13:09.371 START TEST raid_superblock_test 00:13:09.371 ************************************ 00:13:09.371 20:39:52 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 3 00:13:09.371 20:39:52 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:13:09.371 20:39:52 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:13:09.371 20:39:52 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:13:09.371 20:39:52 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:13:09.371 20:39:52 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:13:09.371 20:39:52 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:13:09.371 20:39:52 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:13:09.371 20:39:52 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:13:09.371 20:39:52 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:13:09.371 20:39:52 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:13:09.371 20:39:52 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:13:09.371 20:39:52 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:13:09.371 20:39:52 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:13:09.371 20:39:52 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:13:09.371 20:39:52 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:13:09.371 20:39:52 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:13:09.371 20:39:52 -- bdev/bdev_raid.sh@357 -- # raid_pid=49747 00:13:09.371 20:39:52 -- bdev/bdev_raid.sh@358 -- # waitforlisten 49747 /var/tmp/spdk-raid.sock 00:13:09.371 20:39:52 -- common/autotest_common.sh@819 -- # '[' -z 49747 ']' 00:13:09.371 20:39:52 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:13:09.371 20:39:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:09.371 20:39:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:09.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:09.371 20:39:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:09.371 20:39:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:09.371 20:39:52 -- common/autotest_common.sh@10 -- # set +x 00:13:09.371 [2024-04-15 20:39:52.716919] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
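For reference, the RPC sequence this superblock test exercises can be driven by hand against a bdev_svc app listening on /var/tmp/spdk-raid.sock. A minimal sketch, assuming a freshly started instance; every command, flag, and name below appears verbatim in the trace, and only the rpc() wrapper is an added convenience:

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
# Three 32 MB malloc base bdevs with a 512-byte block size
# (65536 blocks each, matching the bdev_get_bdevs dumps above)
rpc bdev_malloc_create 32 512 -b malloc1
rpc bdev_malloc_create 32 512 -b malloc2
rpc bdev_malloc_create 32 512 -b malloc3
# Assemble raid0 across the three with a 64 KiB strip (-z 64);
# -s writes a superblock to each base bdev
rpc bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 -s
# State should report "online" once all three base bdevs are claimed
rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'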
00:13:09.371 [2024-04-15 20:39:52.717100] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49747 ] 00:13:09.630 [2024-04-15 20:39:52.873392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.630 [2024-04-15 20:39:53.071038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.889 [2024-04-15 20:39:53.266317] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:10.824 20:39:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:10.824 20:39:54 -- common/autotest_common.sh@852 -- # return 0 00:13:10.824 20:39:54 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:13:10.824 20:39:54 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:13:10.824 20:39:54 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:13:10.824 20:39:54 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:13:10.824 20:39:54 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:10.824 20:39:54 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:10.824 20:39:54 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:13:10.824 20:39:54 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:10.824 20:39:54 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:13:11.083 malloc1 00:13:11.083 20:39:54 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:11.342 [2024-04-15 20:39:54.590836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:11.342 [2024-04-15 20:39:54.590931] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.342 [2024-04-15 20:39:54.591003] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027080 00:13:11.342 [2024-04-15 20:39:54.591052] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.342 [2024-04-15 20:39:54.592704] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.342 [2024-04-15 20:39:54.592743] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:11.342 pt1 00:13:11.342 20:39:54 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:13:11.342 20:39:54 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:13:11.342 20:39:54 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:13:11.342 20:39:54 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:13:11.342 20:39:54 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:11.342 20:39:54 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:11.342 20:39:54 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:13:11.342 20:39:54 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:11.342 20:39:54 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:13:11.342 malloc2 00:13:11.342 20:39:54 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:13:11.601 [2024-04-15 20:39:54.976074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:11.601 [2024-04-15 20:39:54.976155] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.601 [2024-04-15 20:39:54.976191] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000028e80 00:13:11.601 [2024-04-15 20:39:54.976227] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.601 pt2 00:13:11.601 [2024-04-15 20:39:54.977971] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.601 [2024-04-15 20:39:54.978026] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:11.601 20:39:54 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:13:11.601 20:39:54 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:13:11.601 20:39:54 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:13:11.601 20:39:54 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:13:11.601 20:39:54 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:11.601 20:39:54 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:11.601 20:39:54 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:13:11.601 20:39:54 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:11.601 20:39:54 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:13:11.860 malloc3 00:13:11.860 20:39:55 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:12.118 [2024-04-15 20:39:55.407617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:12.118 [2024-04-15 20:39:55.407713] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:12.118 [2024-04-15 20:39:55.407774] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002ac80 00:13:12.118 [2024-04-15 20:39:55.407811] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.118 [2024-04-15 20:39:55.409454] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.118 [2024-04-15 20:39:55.409506] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:12.118 pt3 00:13:12.118 20:39:55 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:13:12.118 20:39:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:13:12.118 20:39:55 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:13:12.378 [2024-04-15 20:39:55.635361] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:12.378 [2024-04-15 20:39:55.636768] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:12.378 [2024-04-15 20:39:55.636811] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:12.378 [2024-04-15 20:39:55.636901] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002c180 00:13:12.378 [2024-04-15 20:39:55.636910] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:12.378 [2024-04-15 20:39:55.637003] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:13:12.378 [2024-04-15 20:39:55.637260] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002c180 00:13:12.378 [2024-04-15 20:39:55.637270] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002c180 00:13:12.378 [2024-04-15 20:39:55.637361] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:12.378 20:39:55 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:12.378 20:39:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:12.378 20:39:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:12.378 20:39:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:12.378 20:39:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:12.378 20:39:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:12.378 20:39:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:12.378 20:39:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:12.378 20:39:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:12.378 20:39:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:12.378 20:39:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:12.378 20:39:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.378 20:39:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:12.378 "name": "raid_bdev1", 00:13:12.378 "uuid": "6a6ab36c-7012-4676-b866-227ebafbf0d3", 00:13:12.378 "strip_size_kb": 64, 00:13:12.378 "state": "online", 00:13:12.378 "raid_level": "raid0", 00:13:12.378 "superblock": true, 00:13:12.378 "num_base_bdevs": 3, 00:13:12.378 "num_base_bdevs_discovered": 3, 00:13:12.378 "num_base_bdevs_operational": 3, 00:13:12.378 "base_bdevs_list": [ 00:13:12.378 { 00:13:12.378 "name": "pt1", 00:13:12.378 "uuid": "e4568b51-e649-59d1-a856-4aa9fa48124a", 00:13:12.378 "is_configured": true, 00:13:12.378 "data_offset": 2048, 00:13:12.378 "data_size": 63488 00:13:12.378 }, 00:13:12.378 { 00:13:12.378 "name": "pt2", 00:13:12.378 "uuid": "20c45074-4505-55e0-a1dd-3e43faa20406", 00:13:12.378 "is_configured": true, 00:13:12.378 "data_offset": 2048, 00:13:12.378 "data_size": 63488 00:13:12.378 }, 00:13:12.378 { 00:13:12.378 "name": "pt3", 00:13:12.378 "uuid": "c9964919-76c9-54a4-9ecc-658f5575693e", 00:13:12.378 "is_configured": true, 00:13:12.378 "data_offset": 2048, 00:13:12.378 "data_size": 63488 00:13:12.378 } 00:13:12.378 ] 00:13:12.378 }' 00:13:12.378 20:39:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:12.378 20:39:55 -- common/autotest_common.sh@10 -- # set +x 00:13:12.947 20:39:56 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:12.947 20:39:56 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:13:13.205 [2024-04-15 20:39:56.581964] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:13.205 20:39:56 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=6a6ab36c-7012-4676-b866-227ebafbf0d3 00:13:13.205 20:39:56 -- bdev/bdev_raid.sh@380 -- # '[' -z 6a6ab36c-7012-4676-b866-227ebafbf0d3 ']' 00:13:13.205 20:39:56 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:13.464 [2024-04-15 20:39:56.781543] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:13.464 [2024-04-15 20:39:56.781577] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:13.464 [2024-04-15 20:39:56.781636] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:13.464 [2024-04-15 20:39:56.781886] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:13.464 [2024-04-15 20:39:56.781901] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002c180 name raid_bdev1, state offline 00:13:13.464 20:39:56 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:13.464 20:39:56 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:13:13.464 20:39:56 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:13:13.464 20:39:56 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:13:13.464 20:39:56 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:13:13.464 20:39:56 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:13:13.722 20:39:57 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:13:13.722 20:39:57 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:13.981 20:39:57 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:13:13.981 20:39:57 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:13:13.981 20:39:57 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:13.981 20:39:57 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:13:14.239 20:39:57 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:13:14.239 20:39:57 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:13:14.239 20:39:57 -- common/autotest_common.sh@640 -- # local es=0 00:13:14.239 20:39:57 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:13:14.239 20:39:57 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:14.239 20:39:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:14.239 20:39:57 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:14.239 20:39:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:14.239 20:39:57 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:14.239 20:39:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:14.239 20:39:57 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:14.240 20:39:57 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:14.240 20:39:57 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:13:14.499 [2024-04-15 20:39:57.819973] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:14.499 [2024-04-15 20:39:57.821369] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:14.499 [2024-04-15 20:39:57.821402] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:14.499 [2024-04-15 20:39:57.821428] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:13:14.499 [2024-04-15 20:39:57.821484] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:13:14.499 [2024-04-15 20:39:57.821514] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:13:14.499 [2024-04-15 20:39:57.821550] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:14.499 [2024-04-15 20:39:57.821560] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002c780 name raid_bdev1, state configuring 00:13:14.499 request: 00:13:14.499 { 00:13:14.499 "name": "raid_bdev1", 00:13:14.499 "raid_level": "raid0", 00:13:14.499 "base_bdevs": [ 00:13:14.499 "malloc1", 00:13:14.499 "malloc2", 00:13:14.499 "malloc3" 00:13:14.499 ], 00:13:14.499 "superblock": false, 00:13:14.499 "strip_size_kb": 64, 00:13:14.499 "method": "bdev_raid_create", 00:13:14.499 "req_id": 1 00:13:14.499 } 00:13:14.499 Got JSON-RPC error response 00:13:14.499 response: 00:13:14.499 { 00:13:14.499 "code": -17, 00:13:14.499 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:14.499 } 00:13:14.499 20:39:57 -- common/autotest_common.sh@643 -- # es=1 00:13:14.499 20:39:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:14.499 20:39:57 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:14.499 20:39:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:14.499 20:39:57 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:14.499 20:39:57 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:13:14.771 20:39:58 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:13:14.771 20:39:58 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:13:14.771 20:39:58 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:14.771 [2024-04-15 20:39:58.191375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:14.771 [2024-04-15 20:39:58.191470] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.771 [2024-04-15 20:39:58.191513] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002d980 00:13:14.771 [2024-04-15 20:39:58.191538] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.771 [2024-04-15 20:39:58.193245] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.771 [2024-04-15 20:39:58.193284] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:14.771 [2024-04-15 20:39:58.193499] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:13:14.771 [2024-04-15 20:39:58.193560] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:14.771 pt1 00:13:14.771 20:39:58 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 
configuring raid0 64 3 00:13:14.771 20:39:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:14.771 20:39:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:14.771 20:39:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:14.771 20:39:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:14.771 20:39:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:14.771 20:39:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:14.771 20:39:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:14.771 20:39:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:14.771 20:39:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:14.771 20:39:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.771 20:39:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:15.030 20:39:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:15.030 "name": "raid_bdev1", 00:13:15.030 "uuid": "6a6ab36c-7012-4676-b866-227ebafbf0d3", 00:13:15.030 "strip_size_kb": 64, 00:13:15.030 "state": "configuring", 00:13:15.030 "raid_level": "raid0", 00:13:15.030 "superblock": true, 00:13:15.030 "num_base_bdevs": 3, 00:13:15.030 "num_base_bdevs_discovered": 1, 00:13:15.030 "num_base_bdevs_operational": 3, 00:13:15.030 "base_bdevs_list": [ 00:13:15.030 { 00:13:15.030 "name": "pt1", 00:13:15.030 "uuid": "e4568b51-e649-59d1-a856-4aa9fa48124a", 00:13:15.030 "is_configured": true, 00:13:15.030 "data_offset": 2048, 00:13:15.030 "data_size": 63488 00:13:15.030 }, 00:13:15.030 { 00:13:15.030 "name": null, 00:13:15.030 "uuid": "20c45074-4505-55e0-a1dd-3e43faa20406", 00:13:15.030 "is_configured": false, 00:13:15.030 "data_offset": 2048, 00:13:15.030 "data_size": 63488 00:13:15.030 }, 00:13:15.030 { 00:13:15.030 "name": null, 00:13:15.030 "uuid": "c9964919-76c9-54a4-9ecc-658f5575693e", 00:13:15.030 "is_configured": false, 00:13:15.030 "data_offset": 2048, 00:13:15.030 "data_size": 63488 00:13:15.030 } 00:13:15.030 ] 00:13:15.030 }' 00:13:15.030 20:39:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:15.030 20:39:58 -- common/autotest_common.sh@10 -- # set +x 00:13:15.598 20:39:58 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:13:15.598 20:39:58 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:15.857 [2024-04-15 20:39:59.117960] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:15.857 [2024-04-15 20:39:59.118036] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.857 [2024-04-15 20:39:59.118081] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002f480 00:13:15.857 [2024-04-15 20:39:59.118102] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.857 [2024-04-15 20:39:59.118398] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.857 [2024-04-15 20:39:59.118422] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:15.857 [2024-04-15 20:39:59.118508] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:13:15.857 [2024-04-15 20:39:59.118526] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:15.857 pt2 00:13:15.857 20:39:59 -- 
bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:15.857 [2024-04-15 20:39:59.289737] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:15.857 20:39:59 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:13:15.857 20:39:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:15.857 20:39:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:15.857 20:39:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:15.857 20:39:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:15.857 20:39:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:15.857 20:39:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:15.857 20:39:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:15.857 20:39:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:15.857 20:39:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:15.857 20:39:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:15.857 20:39:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.116 20:39:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:16.116 "name": "raid_bdev1", 00:13:16.116 "uuid": "6a6ab36c-7012-4676-b866-227ebafbf0d3", 00:13:16.116 "strip_size_kb": 64, 00:13:16.116 "state": "configuring", 00:13:16.116 "raid_level": "raid0", 00:13:16.116 "superblock": true, 00:13:16.116 "num_base_bdevs": 3, 00:13:16.116 "num_base_bdevs_discovered": 1, 00:13:16.116 "num_base_bdevs_operational": 3, 00:13:16.116 "base_bdevs_list": [ 00:13:16.116 { 00:13:16.116 "name": "pt1", 00:13:16.116 "uuid": "e4568b51-e649-59d1-a856-4aa9fa48124a", 00:13:16.116 "is_configured": true, 00:13:16.116 "data_offset": 2048, 00:13:16.116 "data_size": 63488 00:13:16.116 }, 00:13:16.116 { 00:13:16.116 "name": null, 00:13:16.116 "uuid": "20c45074-4505-55e0-a1dd-3e43faa20406", 00:13:16.116 "is_configured": false, 00:13:16.116 "data_offset": 2048, 00:13:16.116 "data_size": 63488 00:13:16.116 }, 00:13:16.116 { 00:13:16.116 "name": null, 00:13:16.116 "uuid": "c9964919-76c9-54a4-9ecc-658f5575693e", 00:13:16.116 "is_configured": false, 00:13:16.116 "data_offset": 2048, 00:13:16.116 "data_size": 63488 00:13:16.116 } 00:13:16.116 ] 00:13:16.116 }' 00:13:16.116 20:39:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:16.116 20:39:59 -- common/autotest_common.sh@10 -- # set +x 00:13:16.684 20:40:00 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:13:16.684 20:40:00 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:13:16.684 20:40:00 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:16.943 [2024-04-15 20:40:00.308161] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:16.943 [2024-04-15 20:40:00.308241] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.943 [2024-04-15 20:40:00.308281] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000030c80 00:13:16.943 [2024-04-15 20:40:00.308308] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.943 [2024-04-15 20:40:00.308602] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.943 [2024-04-15 20:40:00.308629] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:16.943 pt2 00:13:16.943 [2024-04-15 20:40:00.309109] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:13:16.943 [2024-04-15 20:40:00.309206] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:16.943 20:40:00 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:13:16.943 20:40:00 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:13:16.943 20:40:00 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:17.202 [2024-04-15 20:40:00.475930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:17.202 [2024-04-15 20:40:00.476007] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.202 [2024-04-15 20:40:00.476041] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000032180 00:13:17.202 [2024-04-15 20:40:00.476066] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.202 [2024-04-15 20:40:00.476329] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.202 [2024-04-15 20:40:00.476358] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:17.202 [2024-04-15 20:40:00.476453] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:13:17.202 [2024-04-15 20:40:00.476470] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:17.202 [2024-04-15 20:40:00.476528] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002ee80 00:13:17.202 [2024-04-15 20:40:00.476537] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:17.202 [2024-04-15 20:40:00.476611] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:13:17.202 [2024-04-15 20:40:00.477064] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002ee80 00:13:17.202 [2024-04-15 20:40:00.477084] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002ee80 00:13:17.202 [2024-04-15 20:40:00.477185] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.202 pt3 00:13:17.202 20:40:00 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:13:17.202 20:40:00 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:13:17.202 20:40:00 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:13:17.202 20:40:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:17.202 20:40:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:17.202 20:40:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:17.202 20:40:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:17.202 20:40:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:17.202 20:40:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:17.202 20:40:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:17.202 20:40:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:17.202 20:40:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:17.202 20:40:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:17.202 
20:40:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.202 20:40:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:17.202 "name": "raid_bdev1", 00:13:17.202 "uuid": "6a6ab36c-7012-4676-b866-227ebafbf0d3", 00:13:17.202 "strip_size_kb": 64, 00:13:17.202 "state": "online", 00:13:17.202 "raid_level": "raid0", 00:13:17.202 "superblock": true, 00:13:17.202 "num_base_bdevs": 3, 00:13:17.202 "num_base_bdevs_discovered": 3, 00:13:17.202 "num_base_bdevs_operational": 3, 00:13:17.202 "base_bdevs_list": [ 00:13:17.202 { 00:13:17.202 "name": "pt1", 00:13:17.202 "uuid": "e4568b51-e649-59d1-a856-4aa9fa48124a", 00:13:17.202 "is_configured": true, 00:13:17.202 "data_offset": 2048, 00:13:17.202 "data_size": 63488 00:13:17.202 }, 00:13:17.202 { 00:13:17.202 "name": "pt2", 00:13:17.202 "uuid": "20c45074-4505-55e0-a1dd-3e43faa20406", 00:13:17.202 "is_configured": true, 00:13:17.202 "data_offset": 2048, 00:13:17.202 "data_size": 63488 00:13:17.202 }, 00:13:17.202 { 00:13:17.202 "name": "pt3", 00:13:17.202 "uuid": "c9964919-76c9-54a4-9ecc-658f5575693e", 00:13:17.203 "is_configured": true, 00:13:17.203 "data_offset": 2048, 00:13:17.203 "data_size": 63488 00:13:17.203 } 00:13:17.203 ] 00:13:17.203 }' 00:13:17.203 20:40:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:17.203 20:40:00 -- common/autotest_common.sh@10 -- # set +x 00:13:17.770 20:40:01 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:17.770 20:40:01 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:13:18.029 [2024-04-15 20:40:01.410632] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:18.029 20:40:01 -- bdev/bdev_raid.sh@430 -- # '[' 6a6ab36c-7012-4676-b866-227ebafbf0d3 '!=' 6a6ab36c-7012-4676-b866-227ebafbf0d3 ']' 00:13:18.029 20:40:01 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:13:18.029 20:40:01 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:13:18.029 20:40:01 -- bdev/bdev_raid.sh@197 -- # return 1 00:13:18.029 20:40:01 -- bdev/bdev_raid.sh@511 -- # killprocess 49747 00:13:18.029 20:40:01 -- common/autotest_common.sh@926 -- # '[' -z 49747 ']' 00:13:18.029 20:40:01 -- common/autotest_common.sh@930 -- # kill -0 49747 00:13:18.029 20:40:01 -- common/autotest_common.sh@931 -- # uname 00:13:18.029 20:40:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:18.029 20:40:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 49747 00:13:18.029 20:40:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:18.029 20:40:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:18.029 20:40:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 49747' 00:13:18.029 killing process with pid 49747 00:13:18.029 20:40:01 -- common/autotest_common.sh@945 -- # kill 49747 00:13:18.029 20:40:01 -- common/autotest_common.sh@950 -- # wait 49747 00:13:18.029 [2024-04-15 20:40:01.462862] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:18.029 [2024-04-15 20:40:01.462926] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:18.029 [2024-04-15 20:40:01.462961] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:18.029 [2024-04-15 20:40:01.462969] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002ee80 name raid_bdev1, state offline 00:13:18.287 [2024-04-15 20:40:01.750436] 
bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:19.665 20:40:03 -- bdev/bdev_raid.sh@513 -- # return 0 00:13:19.665 00:13:19.665 real 0m10.479s 00:13:19.665 user 0m17.310s 00:13:19.665 sys 0m1.255s 00:13:19.665 20:40:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:19.665 20:40:03 -- common/autotest_common.sh@10 -- # set +x 00:13:19.665 ************************************ 00:13:19.665 END TEST raid_superblock_test 00:13:19.665 ************************************ 00:13:19.665 20:40:03 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:13:19.665 20:40:03 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:13:19.665 20:40:03 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:13:19.665 20:40:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:19.665 20:40:03 -- common/autotest_common.sh@10 -- # set +x 00:13:19.665 ************************************ 00:13:19.665 START TEST raid_state_function_test 00:13:19.665 ************************************ 00:13:19.665 20:40:03 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 false 00:13:19.665 20:40:03 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:13:19.665 20:40:03 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:13:19.665 20:40:03 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:13:19.665 20:40:03 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:13:19.665 20:40:03 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:13:19.665 20:40:03 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:13:19.665 20:40:03 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:19.665 20:40:03 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:13:19.665 20:40:03 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:19.665 20:40:03 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:19.665 20:40:03 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:13:19.665 20:40:03 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:19.665 20:40:03 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:19.665 20:40:03 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:13:19.665 20:40:03 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:19.665 20:40:03 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:19.665 20:40:03 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:13:19.665 20:40:03 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:13:19.665 20:40:03 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:13:19.665 20:40:03 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:13:19.665 20:40:03 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:13:19.665 20:40:03 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:13:19.665 20:40:03 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:13:19.665 20:40:03 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:13:19.665 Process raid pid: 50057 00:13:19.665 20:40:03 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:13:19.665 20:40:03 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:13:19.665 20:40:03 -- bdev/bdev_raid.sh@226 -- # raid_pid=50057 00:13:19.665 20:40:03 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 50057' 00:13:19.665 20:40:03 -- bdev/bdev_raid.sh@228 -- # waitforlisten 50057 /var/tmp/spdk-raid.sock 00:13:19.665 20:40:03 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L 
bdev_raid 00:13:19.665 20:40:03 -- common/autotest_common.sh@819 -- # '[' -z 50057 ']' 00:13:19.665 20:40:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:19.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:19.665 20:40:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:19.665 20:40:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:19.665 20:40:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:19.665 20:40:03 -- common/autotest_common.sh@10 -- # set +x 00:13:19.923 [2024-04-15 20:40:03.279806] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:13:19.923 [2024-04-15 20:40:03.279940] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.182 [2024-04-15 20:40:03.423051] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.182 [2024-04-15 20:40:03.612323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.441 [2024-04-15 20:40:03.808985] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:20.699 20:40:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:20.699 20:40:03 -- common/autotest_common.sh@852 -- # return 0 00:13:20.699 20:40:03 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:20.699 [2024-04-15 20:40:04.112958] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:20.699 [2024-04-15 20:40:04.113020] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:20.699 [2024-04-15 20:40:04.113030] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:20.699 [2024-04-15 20:40:04.113054] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:20.699 [2024-04-15 20:40:04.113062] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:20.699 [2024-04-15 20:40:04.113096] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:20.699 20:40:04 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:20.699 20:40:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:20.699 20:40:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:20.699 20:40:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:20.699 20:40:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:20.699 20:40:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:20.699 20:40:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:20.699 20:40:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:20.699 20:40:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:20.699 20:40:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:20.699 20:40:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:20.699 20:40:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:13:20.957 20:40:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:20.957 "name": "Existed_Raid", 00:13:20.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.958 "strip_size_kb": 64, 00:13:20.958 "state": "configuring", 00:13:20.958 "raid_level": "concat", 00:13:20.958 "superblock": false, 00:13:20.958 "num_base_bdevs": 3, 00:13:20.958 "num_base_bdevs_discovered": 0, 00:13:20.958 "num_base_bdevs_operational": 3, 00:13:20.958 "base_bdevs_list": [ 00:13:20.958 { 00:13:20.958 "name": "BaseBdev1", 00:13:20.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.958 "is_configured": false, 00:13:20.958 "data_offset": 0, 00:13:20.958 "data_size": 0 00:13:20.958 }, 00:13:20.958 { 00:13:20.958 "name": "BaseBdev2", 00:13:20.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.958 "is_configured": false, 00:13:20.958 "data_offset": 0, 00:13:20.958 "data_size": 0 00:13:20.958 }, 00:13:20.958 { 00:13:20.958 "name": "BaseBdev3", 00:13:20.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.958 "is_configured": false, 00:13:20.958 "data_offset": 0, 00:13:20.958 "data_size": 0 00:13:20.958 } 00:13:20.958 ] 00:13:20.958 }' 00:13:20.958 20:40:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:20.958 20:40:04 -- common/autotest_common.sh@10 -- # set +x 00:13:21.526 20:40:04 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:21.527 [2024-04-15 20:40:04.923741] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:21.527 [2024-04-15 20:40:04.923780] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:13:21.527 20:40:04 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:21.785 [2024-04-15 20:40:05.071528] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:21.785 [2024-04-15 20:40:05.071591] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:21.785 [2024-04-15 20:40:05.071601] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:21.785 [2024-04-15 20:40:05.071617] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:21.785 [2024-04-15 20:40:05.071624] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:21.785 [2024-04-15 20:40:05.071829] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:21.785 20:40:05 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:21.785 [2024-04-15 20:40:05.283175] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:21.785 BaseBdev1 00:13:22.045 20:40:05 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:13:22.045 20:40:05 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:13:22.045 20:40:05 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:22.045 20:40:05 -- common/autotest_common.sh@889 -- # local i 00:13:22.045 20:40:05 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:22.045 20:40:05 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:22.045 20:40:05 -- 
common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:22.045 20:40:05 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:22.304 [ 00:13:22.304 { 00:13:22.304 "name": "BaseBdev1", 00:13:22.304 "aliases": [ 00:13:22.304 "d365731e-f0fc-469e-bbd3-0a5df585815e" 00:13:22.304 ], 00:13:22.304 "product_name": "Malloc disk", 00:13:22.304 "block_size": 512, 00:13:22.304 "num_blocks": 65536, 00:13:22.304 "uuid": "d365731e-f0fc-469e-bbd3-0a5df585815e", 00:13:22.304 "assigned_rate_limits": { 00:13:22.304 "rw_ios_per_sec": 0, 00:13:22.304 "rw_mbytes_per_sec": 0, 00:13:22.304 "r_mbytes_per_sec": 0, 00:13:22.304 "w_mbytes_per_sec": 0 00:13:22.304 }, 00:13:22.304 "claimed": true, 00:13:22.304 "claim_type": "exclusive_write", 00:13:22.304 "zoned": false, 00:13:22.304 "supported_io_types": { 00:13:22.304 "read": true, 00:13:22.304 "write": true, 00:13:22.304 "unmap": true, 00:13:22.304 "write_zeroes": true, 00:13:22.304 "flush": true, 00:13:22.304 "reset": true, 00:13:22.304 "compare": false, 00:13:22.304 "compare_and_write": false, 00:13:22.304 "abort": true, 00:13:22.304 "nvme_admin": false, 00:13:22.304 "nvme_io": false 00:13:22.304 }, 00:13:22.304 "memory_domains": [ 00:13:22.304 { 00:13:22.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.304 "dma_device_type": 2 00:13:22.304 } 00:13:22.304 ], 00:13:22.304 "driver_specific": {} 00:13:22.304 } 00:13:22.305 ] 00:13:22.305 20:40:05 -- common/autotest_common.sh@895 -- # return 0 00:13:22.305 20:40:05 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:22.305 20:40:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:22.305 20:40:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:22.305 20:40:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:22.305 20:40:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:22.305 20:40:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:22.305 20:40:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:22.305 20:40:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:22.305 20:40:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:22.305 20:40:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:22.305 20:40:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:22.305 20:40:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.305 20:40:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:22.305 "name": "Existed_Raid", 00:13:22.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.305 "strip_size_kb": 64, 00:13:22.305 "state": "configuring", 00:13:22.305 "raid_level": "concat", 00:13:22.305 "superblock": false, 00:13:22.305 "num_base_bdevs": 3, 00:13:22.305 "num_base_bdevs_discovered": 1, 00:13:22.305 "num_base_bdevs_operational": 3, 00:13:22.305 "base_bdevs_list": [ 00:13:22.305 { 00:13:22.305 "name": "BaseBdev1", 00:13:22.305 "uuid": "d365731e-f0fc-469e-bbd3-0a5df585815e", 00:13:22.305 "is_configured": true, 00:13:22.305 "data_offset": 0, 00:13:22.305 "data_size": 65536 00:13:22.305 }, 00:13:22.305 { 00:13:22.305 "name": "BaseBdev2", 00:13:22.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.305 "is_configured": false, 00:13:22.305 "data_offset": 0, 
00:13:22.305 "data_size": 0 00:13:22.305 }, 00:13:22.305 { 00:13:22.305 "name": "BaseBdev3", 00:13:22.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.305 "is_configured": false, 00:13:22.305 "data_offset": 0, 00:13:22.305 "data_size": 0 00:13:22.305 } 00:13:22.305 ] 00:13:22.305 }' 00:13:22.305 20:40:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:22.305 20:40:05 -- common/autotest_common.sh@10 -- # set +x 00:13:22.929 20:40:06 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:22.929 [2024-04-15 20:40:06.425472] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:22.929 [2024-04-15 20:40:06.425519] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027380 name Existed_Raid, state configuring 00:13:23.188 20:40:06 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:13:23.188 20:40:06 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:23.188 [2024-04-15 20:40:06.589265] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:23.188 [2024-04-15 20:40:06.590687] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:23.188 [2024-04-15 20:40:06.590742] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:23.188 [2024-04-15 20:40:06.590752] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:23.188 [2024-04-15 20:40:06.590776] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:23.188 20:40:06 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:13:23.188 20:40:06 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:23.188 20:40:06 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:23.188 20:40:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:23.188 20:40:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:23.188 20:40:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:23.188 20:40:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:23.188 20:40:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:23.188 20:40:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:23.188 20:40:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:23.188 20:40:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:23.188 20:40:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:23.188 20:40:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:23.188 20:40:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.447 20:40:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:23.447 "name": "Existed_Raid", 00:13:23.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.447 "strip_size_kb": 64, 00:13:23.447 "state": "configuring", 00:13:23.447 "raid_level": "concat", 00:13:23.447 "superblock": false, 00:13:23.447 "num_base_bdevs": 3, 00:13:23.447 "num_base_bdevs_discovered": 1, 00:13:23.447 "num_base_bdevs_operational": 3, 00:13:23.447 "base_bdevs_list": [ 00:13:23.447 { 00:13:23.447 "name": "BaseBdev1", 00:13:23.447 "uuid": 
"d365731e-f0fc-469e-bbd3-0a5df585815e", 00:13:23.447 "is_configured": true, 00:13:23.447 "data_offset": 0, 00:13:23.447 "data_size": 65536 00:13:23.447 }, 00:13:23.447 { 00:13:23.447 "name": "BaseBdev2", 00:13:23.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.447 "is_configured": false, 00:13:23.447 "data_offset": 0, 00:13:23.447 "data_size": 0 00:13:23.447 }, 00:13:23.447 { 00:13:23.447 "name": "BaseBdev3", 00:13:23.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.447 "is_configured": false, 00:13:23.447 "data_offset": 0, 00:13:23.447 "data_size": 0 00:13:23.447 } 00:13:23.447 ] 00:13:23.447 }' 00:13:23.447 20:40:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:23.447 20:40:06 -- common/autotest_common.sh@10 -- # set +x 00:13:24.015 20:40:07 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:24.272 [2024-04-15 20:40:07.669520] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:24.272 BaseBdev2 00:13:24.272 20:40:07 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:13:24.272 20:40:07 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:13:24.272 20:40:07 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:24.272 20:40:07 -- common/autotest_common.sh@889 -- # local i 00:13:24.272 20:40:07 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:24.272 20:40:07 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:24.272 20:40:07 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:24.530 20:40:07 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:24.788 [ 00:13:24.788 { 00:13:24.788 "name": "BaseBdev2", 00:13:24.788 "aliases": [ 00:13:24.788 "64e2ef83-82a9-425a-8220-f2f71ec5429f" 00:13:24.788 ], 00:13:24.788 "product_name": "Malloc disk", 00:13:24.788 "block_size": 512, 00:13:24.788 "num_blocks": 65536, 00:13:24.788 "uuid": "64e2ef83-82a9-425a-8220-f2f71ec5429f", 00:13:24.788 "assigned_rate_limits": { 00:13:24.788 "rw_ios_per_sec": 0, 00:13:24.788 "rw_mbytes_per_sec": 0, 00:13:24.788 "r_mbytes_per_sec": 0, 00:13:24.788 "w_mbytes_per_sec": 0 00:13:24.788 }, 00:13:24.788 "claimed": true, 00:13:24.788 "claim_type": "exclusive_write", 00:13:24.788 "zoned": false, 00:13:24.788 "supported_io_types": { 00:13:24.788 "read": true, 00:13:24.788 "write": true, 00:13:24.788 "unmap": true, 00:13:24.788 "write_zeroes": true, 00:13:24.788 "flush": true, 00:13:24.788 "reset": true, 00:13:24.788 "compare": false, 00:13:24.788 "compare_and_write": false, 00:13:24.788 "abort": true, 00:13:24.788 "nvme_admin": false, 00:13:24.788 "nvme_io": false 00:13:24.788 }, 00:13:24.788 "memory_domains": [ 00:13:24.788 { 00:13:24.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.788 "dma_device_type": 2 00:13:24.788 } 00:13:24.788 ], 00:13:24.788 "driver_specific": {} 00:13:24.788 } 00:13:24.788 ] 00:13:24.788 20:40:08 -- common/autotest_common.sh@895 -- # return 0 00:13:24.789 20:40:08 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:24.789 20:40:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:24.789 20:40:08 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:24.789 20:40:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:24.789 20:40:08 -- 
bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:24.789 20:40:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:24.789 20:40:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:24.789 20:40:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:24.789 20:40:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:24.789 20:40:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:24.789 20:40:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:24.789 20:40:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:24.789 20:40:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:24.789 20:40:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.047 20:40:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:25.047 "name": "Existed_Raid", 00:13:25.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.047 "strip_size_kb": 64, 00:13:25.047 "state": "configuring", 00:13:25.047 "raid_level": "concat", 00:13:25.047 "superblock": false, 00:13:25.047 "num_base_bdevs": 3, 00:13:25.047 "num_base_bdevs_discovered": 2, 00:13:25.047 "num_base_bdevs_operational": 3, 00:13:25.047 "base_bdevs_list": [ 00:13:25.047 { 00:13:25.047 "name": "BaseBdev1", 00:13:25.047 "uuid": "d365731e-f0fc-469e-bbd3-0a5df585815e", 00:13:25.047 "is_configured": true, 00:13:25.047 "data_offset": 0, 00:13:25.047 "data_size": 65536 00:13:25.047 }, 00:13:25.047 { 00:13:25.047 "name": "BaseBdev2", 00:13:25.047 "uuid": "64e2ef83-82a9-425a-8220-f2f71ec5429f", 00:13:25.047 "is_configured": true, 00:13:25.047 "data_offset": 0, 00:13:25.047 "data_size": 65536 00:13:25.047 }, 00:13:25.047 { 00:13:25.047 "name": "BaseBdev3", 00:13:25.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.047 "is_configured": false, 00:13:25.047 "data_offset": 0, 00:13:25.047 "data_size": 0 00:13:25.047 } 00:13:25.047 ] 00:13:25.047 }' 00:13:25.047 20:40:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:25.047 20:40:08 -- common/autotest_common.sh@10 -- # set +x 00:13:25.614 20:40:09 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:25.872 [2024-04-15 20:40:09.247865] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:25.872 [2024-04-15 20:40:09.247907] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000028580 00:13:25.872 [2024-04-15 20:40:09.247916] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:25.872 [2024-04-15 20:40:09.248017] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:13:25.872 [2024-04-15 20:40:09.248217] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000028580 00:13:25.872 [2024-04-15 20:40:09.248227] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000028580 00:13:25.872 [2024-04-15 20:40:09.248410] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:25.872 BaseBdev3 00:13:25.872 20:40:09 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:13:25.872 20:40:09 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:13:25.872 20:40:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:25.872 20:40:09 -- common/autotest_common.sh@889 -- # local i 00:13:25.872 20:40:09 
-- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:25.872 20:40:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:25.872 20:40:09 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:26.130 20:40:09 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:26.389 [ 00:13:26.389 { 00:13:26.389 "name": "BaseBdev3", 00:13:26.389 "aliases": [ 00:13:26.389 "d9bcf160-8b9b-44b3-a348-9a044c61af08" 00:13:26.389 ], 00:13:26.389 "product_name": "Malloc disk", 00:13:26.389 "block_size": 512, 00:13:26.389 "num_blocks": 65536, 00:13:26.389 "uuid": "d9bcf160-8b9b-44b3-a348-9a044c61af08", 00:13:26.389 "assigned_rate_limits": { 00:13:26.389 "rw_ios_per_sec": 0, 00:13:26.389 "rw_mbytes_per_sec": 0, 00:13:26.389 "r_mbytes_per_sec": 0, 00:13:26.389 "w_mbytes_per_sec": 0 00:13:26.389 }, 00:13:26.389 "claimed": true, 00:13:26.389 "claim_type": "exclusive_write", 00:13:26.389 "zoned": false, 00:13:26.389 "supported_io_types": { 00:13:26.389 "read": true, 00:13:26.389 "write": true, 00:13:26.389 "unmap": true, 00:13:26.390 "write_zeroes": true, 00:13:26.390 "flush": true, 00:13:26.390 "reset": true, 00:13:26.390 "compare": false, 00:13:26.390 "compare_and_write": false, 00:13:26.390 "abort": true, 00:13:26.390 "nvme_admin": false, 00:13:26.390 "nvme_io": false 00:13:26.390 }, 00:13:26.390 "memory_domains": [ 00:13:26.390 { 00:13:26.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.390 "dma_device_type": 2 00:13:26.390 } 00:13:26.390 ], 00:13:26.390 "driver_specific": {} 00:13:26.390 } 00:13:26.390 ] 00:13:26.390 20:40:09 -- common/autotest_common.sh@895 -- # return 0 00:13:26.390 20:40:09 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:26.390 20:40:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:26.390 20:40:09 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:13:26.390 20:40:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:26.390 20:40:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:26.390 20:40:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:26.390 20:40:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:26.390 20:40:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:26.390 20:40:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:26.390 20:40:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:26.390 20:40:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:26.390 20:40:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:26.390 20:40:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.390 20:40:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:26.666 20:40:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:26.666 "name": "Existed_Raid", 00:13:26.666 "uuid": "4e903e77-08c6-43a2-a3fc-03f5428f68c2", 00:13:26.666 "strip_size_kb": 64, 00:13:26.666 "state": "online", 00:13:26.666 "raid_level": "concat", 00:13:26.666 "superblock": false, 00:13:26.666 "num_base_bdevs": 3, 00:13:26.666 "num_base_bdevs_discovered": 3, 00:13:26.666 "num_base_bdevs_operational": 3, 00:13:26.666 "base_bdevs_list": [ 00:13:26.666 { 00:13:26.666 "name": "BaseBdev1", 00:13:26.666 "uuid": "d365731e-f0fc-469e-bbd3-0a5df585815e", 00:13:26.666 
"is_configured": true, 00:13:26.666 "data_offset": 0, 00:13:26.666 "data_size": 65536 00:13:26.666 }, 00:13:26.667 { 00:13:26.667 "name": "BaseBdev2", 00:13:26.667 "uuid": "64e2ef83-82a9-425a-8220-f2f71ec5429f", 00:13:26.667 "is_configured": true, 00:13:26.667 "data_offset": 0, 00:13:26.667 "data_size": 65536 00:13:26.667 }, 00:13:26.667 { 00:13:26.667 "name": "BaseBdev3", 00:13:26.667 "uuid": "d9bcf160-8b9b-44b3-a348-9a044c61af08", 00:13:26.667 "is_configured": true, 00:13:26.667 "data_offset": 0, 00:13:26.667 "data_size": 65536 00:13:26.667 } 00:13:26.667 ] 00:13:26.667 }' 00:13:26.667 20:40:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:26.667 20:40:09 -- common/autotest_common.sh@10 -- # set +x 00:13:27.235 20:40:10 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:27.235 [2024-04-15 20:40:10.589877] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:27.235 [2024-04-15 20:40:10.589909] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:27.235 [2024-04-15 20:40:10.589950] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:27.235 20:40:10 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:13:27.235 20:40:10 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:13:27.235 20:40:10 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:13:27.235 20:40:10 -- bdev/bdev_raid.sh@197 -- # return 1 00:13:27.235 20:40:10 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:13:27.235 20:40:10 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:13:27.235 20:40:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:27.235 20:40:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:13:27.235 20:40:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:27.235 20:40:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:27.235 20:40:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:27.235 20:40:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:27.235 20:40:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:27.235 20:40:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:27.236 20:40:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:27.236 20:40:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.236 20:40:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:27.494 20:40:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:27.495 "name": "Existed_Raid", 00:13:27.495 "uuid": "4e903e77-08c6-43a2-a3fc-03f5428f68c2", 00:13:27.495 "strip_size_kb": 64, 00:13:27.495 "state": "offline", 00:13:27.495 "raid_level": "concat", 00:13:27.495 "superblock": false, 00:13:27.495 "num_base_bdevs": 3, 00:13:27.495 "num_base_bdevs_discovered": 2, 00:13:27.495 "num_base_bdevs_operational": 2, 00:13:27.495 "base_bdevs_list": [ 00:13:27.495 { 00:13:27.495 "name": null, 00:13:27.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.495 "is_configured": false, 00:13:27.495 "data_offset": 0, 00:13:27.495 "data_size": 65536 00:13:27.495 }, 00:13:27.495 { 00:13:27.495 "name": "BaseBdev2", 00:13:27.495 "uuid": "64e2ef83-82a9-425a-8220-f2f71ec5429f", 00:13:27.495 "is_configured": true, 00:13:27.495 "data_offset": 0, 00:13:27.495 "data_size": 65536 00:13:27.495 }, 00:13:27.495 { 00:13:27.495 
"name": "BaseBdev3", 00:13:27.495 "uuid": "d9bcf160-8b9b-44b3-a348-9a044c61af08", 00:13:27.495 "is_configured": true, 00:13:27.495 "data_offset": 0, 00:13:27.495 "data_size": 65536 00:13:27.495 } 00:13:27.495 ] 00:13:27.495 }' 00:13:27.495 20:40:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:27.495 20:40:10 -- common/autotest_common.sh@10 -- # set +x 00:13:28.063 20:40:11 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:13:28.063 20:40:11 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:28.063 20:40:11 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:28.063 20:40:11 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:13:28.323 20:40:11 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:13:28.323 20:40:11 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:28.323 20:40:11 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:28.582 [2024-04-15 20:40:11.869062] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:28.582 20:40:11 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:13:28.582 20:40:11 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:28.582 20:40:11 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:28.582 20:40:11 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:13:28.840 20:40:12 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:13:28.840 20:40:12 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:28.840 20:40:12 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:13:29.100 [2024-04-15 20:40:12.363454] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:29.100 [2024-04-15 20:40:12.363501] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000028580 name Existed_Raid, state offline 00:13:29.100 20:40:12 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:13:29.100 20:40:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:29.100 20:40:12 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:29.100 20:40:12 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:13:29.359 20:40:12 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:13:29.359 20:40:12 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:13:29.359 20:40:12 -- bdev/bdev_raid.sh@287 -- # killprocess 50057 00:13:29.359 20:40:12 -- common/autotest_common.sh@926 -- # '[' -z 50057 ']' 00:13:29.359 20:40:12 -- common/autotest_common.sh@930 -- # kill -0 50057 00:13:29.359 20:40:12 -- common/autotest_common.sh@931 -- # uname 00:13:29.359 20:40:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:29.359 20:40:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 50057 00:13:29.359 killing process with pid 50057 00:13:29.359 20:40:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:29.359 20:40:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:29.359 20:40:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 50057' 00:13:29.359 20:40:12 -- common/autotest_common.sh@945 -- # kill 50057 00:13:29.359 20:40:12 -- common/autotest_common.sh@950 -- # wait 50057 00:13:29.359 [2024-04-15 20:40:12.684023] 
bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:29.359 [2024-04-15 20:40:12.684119] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:30.754 20:40:13 -- bdev/bdev_raid.sh@289 -- # return 0 00:13:30.754 00:13:30.754 real 0m10.828s 00:13:30.754 user 0m18.662s 00:13:30.754 sys 0m1.293s 00:13:30.754 20:40:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:30.754 20:40:13 -- common/autotest_common.sh@10 -- # set +x 00:13:30.754 ************************************ 00:13:30.754 END TEST raid_state_function_test 00:13:30.754 ************************************ 00:13:30.754 20:40:14 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:13:30.754 20:40:14 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:13:30.754 20:40:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:30.754 20:40:14 -- common/autotest_common.sh@10 -- # set +x 00:13:30.754 ************************************ 00:13:30.754 START TEST raid_state_function_test_sb 00:13:30.754 ************************************ 00:13:30.754 20:40:14 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 true 00:13:30.754 20:40:14 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:13:30.754 20:40:14 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:13:30.754 20:40:14 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:13:30.754 20:40:14 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:13:30.754 20:40:14 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:13:30.754 20:40:14 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:13:30.754 20:40:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:30.754 20:40:14 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:13:30.754 20:40:14 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:30.754 20:40:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:30.754 20:40:14 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:13:30.754 20:40:14 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:30.754 20:40:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:30.754 20:40:14 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:13:30.754 20:40:14 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:30.754 20:40:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:30.754 Process raid pid: 50431 00:13:30.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
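The "Process raid pid" and "Waiting for process..." lines above mark the harness relaunching a fresh bdev_svc application for the superblock variant of the test: every RAID operation that follows is an rpc.py call against the private /var/tmp/spdk-raid.sock socket. A minimal sketch of that launch pattern, assuming the repo layout shown in the trace (the rpc_get_methods readiness probe is an illustrative stand-in for the harness's actual waitforlisten helper from autotest_common.sh):

SPDK=/home/vagrant/spdk_repo/spdk
# Start the bare bdev service with bdev_raid debug logging (-L bdev_raid)
# on a dedicated RPC socket, as traced at bdev_raid.sh@225.
$SPDK/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!
echo "Process raid pid: $raid_pid"
# Block until the UNIX domain socket answers RPCs before issuing commands.
until $SPDK/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.1
done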
00:13:30.754 20:40:14 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:13:30.754 20:40:14 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:13:30.754 20:40:14 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:13:30.754 20:40:14 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:13:30.754 20:40:14 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:13:30.754 20:40:14 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:13:30.754 20:40:14 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:13:30.754 20:40:14 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:13:30.754 20:40:14 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:13:30.754 20:40:14 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:13:30.754 20:40:14 -- bdev/bdev_raid.sh@226 -- # raid_pid=50431 00:13:30.754 20:40:14 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 50431' 00:13:30.754 20:40:14 -- bdev/bdev_raid.sh@228 -- # waitforlisten 50431 /var/tmp/spdk-raid.sock 00:13:30.754 20:40:14 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:30.754 20:40:14 -- common/autotest_common.sh@819 -- # '[' -z 50431 ']' 00:13:30.754 20:40:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:30.754 20:40:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:30.754 20:40:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:30.754 20:40:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:30.754 20:40:14 -- common/autotest_common.sh@10 -- # set +x 00:13:30.754 [2024-04-15 20:40:14.182458] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
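Once the application is up, the test climbs the same state ladder as the non-superblock run just finished: register the array before its members exist (state stays "configuring" with num_base_bdevs_discovered 0), then create the malloc base bdevs until the array flips to "online". The only difference is the -s flag, which writes a superblock to each base bdev; that is why the dumps later in this trace report data_offset 2048 and data_size 63488 instead of the 0 and 65536 seen above. A condensed sketch using only commands visible in the trace (the rpc wrapper function and the trailing .state projection are illustrative shorthand for the harness's full verify_raid_bdev_state comparison):

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
# Referencing base bdevs that do not exist yet is allowed; the array is
# registered in the "configuring" state until all members are claimed.
rpc bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
for b in BaseBdev1 BaseBdev2 BaseBdev3; do
  # 32 MiB malloc disk, 512-byte blocks -> the num_blocks 65536 in the dumps.
  rpc bdev_malloc_create 32 512 -b "$b"
  rpc bdev_wait_for_examine
done
# After the third base bdev is claimed, the state reads "online".
rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'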
00:13:30.754 [2024-04-15 20:40:14.182602] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:31.013 [2024-04-15 20:40:14.382999] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.273 [2024-04-15 20:40:14.578363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.532 [2024-04-15 20:40:14.806057] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:32.100 20:40:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:32.100 20:40:15 -- common/autotest_common.sh@852 -- # return 0 00:13:32.101 20:40:15 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:32.360 [2024-04-15 20:40:15.786697] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:32.360 [2024-04-15 20:40:15.786765] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:32.360 [2024-04-15 20:40:15.786776] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:32.360 [2024-04-15 20:40:15.786793] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:32.360 [2024-04-15 20:40:15.786799] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:32.360 [2024-04-15 20:40:15.786837] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:32.360 20:40:15 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:32.360 20:40:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:32.360 20:40:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:32.360 20:40:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:32.360 20:40:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:32.360 20:40:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:32.360 20:40:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:32.360 20:40:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:32.360 20:40:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:32.360 20:40:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:32.360 20:40:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:32.360 20:40:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:32.620 20:40:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:32.620 "name": "Existed_Raid", 00:13:32.620 "uuid": "d4fd45be-3f42-4d97-ac81-f87583b47177", 00:13:32.620 "strip_size_kb": 64, 00:13:32.620 "state": "configuring", 00:13:32.620 "raid_level": "concat", 00:13:32.620 "superblock": true, 00:13:32.620 "num_base_bdevs": 3, 00:13:32.620 "num_base_bdevs_discovered": 0, 00:13:32.620 "num_base_bdevs_operational": 3, 00:13:32.620 "base_bdevs_list": [ 00:13:32.620 { 00:13:32.620 "name": "BaseBdev1", 00:13:32.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.620 "is_configured": false, 00:13:32.620 "data_offset": 0, 00:13:32.620 "data_size": 0 00:13:32.620 }, 00:13:32.620 { 00:13:32.620 "name": "BaseBdev2", 00:13:32.620 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:32.620 "is_configured": false, 00:13:32.620 "data_offset": 0, 00:13:32.620 "data_size": 0 00:13:32.620 }, 00:13:32.620 { 00:13:32.620 "name": "BaseBdev3", 00:13:32.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.620 "is_configured": false, 00:13:32.620 "data_offset": 0, 00:13:32.620 "data_size": 0 00:13:32.620 } 00:13:32.620 ] 00:13:32.620 }' 00:13:32.620 20:40:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:32.620 20:40:15 -- common/autotest_common.sh@10 -- # set +x 00:13:33.188 20:40:16 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:33.188 [2024-04-15 20:40:16.621281] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:33.188 [2024-04-15 20:40:16.621320] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:13:33.188 20:40:16 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:33.447 [2024-04-15 20:40:16.801097] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:33.447 [2024-04-15 20:40:16.801163] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:33.447 [2024-04-15 20:40:16.801174] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:33.447 [2024-04-15 20:40:16.801189] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:33.447 [2024-04-15 20:40:16.801196] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:33.447 [2024-04-15 20:40:16.801224] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:33.447 20:40:16 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:33.705 [2024-04-15 20:40:16.989165] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:33.705 BaseBdev1 00:13:33.705 20:40:16 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:13:33.705 20:40:16 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:13:33.705 20:40:16 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:33.705 20:40:16 -- common/autotest_common.sh@889 -- # local i 00:13:33.705 20:40:16 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:33.705 20:40:17 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:33.705 20:40:17 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:33.705 20:40:17 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:33.965 [ 00:13:33.965 { 00:13:33.965 "name": "BaseBdev1", 00:13:33.965 "aliases": [ 00:13:33.965 "9dd29f56-2dfe-4fe7-a6d3-f41e80d05eab" 00:13:33.965 ], 00:13:33.965 "product_name": "Malloc disk", 00:13:33.965 "block_size": 512, 00:13:33.965 "num_blocks": 65536, 00:13:33.965 "uuid": "9dd29f56-2dfe-4fe7-a6d3-f41e80d05eab", 00:13:33.965 "assigned_rate_limits": { 00:13:33.965 "rw_ios_per_sec": 0, 00:13:33.965 "rw_mbytes_per_sec": 0, 00:13:33.965 "r_mbytes_per_sec": 0, 00:13:33.965 
"w_mbytes_per_sec": 0 00:13:33.965 }, 00:13:33.965 "claimed": true, 00:13:33.965 "claim_type": "exclusive_write", 00:13:33.965 "zoned": false, 00:13:33.965 "supported_io_types": { 00:13:33.965 "read": true, 00:13:33.965 "write": true, 00:13:33.965 "unmap": true, 00:13:33.965 "write_zeroes": true, 00:13:33.965 "flush": true, 00:13:33.965 "reset": true, 00:13:33.965 "compare": false, 00:13:33.965 "compare_and_write": false, 00:13:33.965 "abort": true, 00:13:33.965 "nvme_admin": false, 00:13:33.965 "nvme_io": false 00:13:33.965 }, 00:13:33.965 "memory_domains": [ 00:13:33.965 { 00:13:33.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.965 "dma_device_type": 2 00:13:33.965 } 00:13:33.965 ], 00:13:33.965 "driver_specific": {} 00:13:33.965 } 00:13:33.965 ] 00:13:33.965 20:40:17 -- common/autotest_common.sh@895 -- # return 0 00:13:33.965 20:40:17 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:33.965 20:40:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:33.965 20:40:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:33.965 20:40:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:33.965 20:40:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:33.965 20:40:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:33.965 20:40:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:33.965 20:40:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:33.965 20:40:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:33.965 20:40:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:33.965 20:40:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.965 20:40:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:34.224 20:40:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:34.224 "name": "Existed_Raid", 00:13:34.224 "uuid": "528c47ee-9b22-482e-94eb-794e44148762", 00:13:34.224 "strip_size_kb": 64, 00:13:34.224 "state": "configuring", 00:13:34.224 "raid_level": "concat", 00:13:34.224 "superblock": true, 00:13:34.224 "num_base_bdevs": 3, 00:13:34.224 "num_base_bdevs_discovered": 1, 00:13:34.224 "num_base_bdevs_operational": 3, 00:13:34.224 "base_bdevs_list": [ 00:13:34.224 { 00:13:34.224 "name": "BaseBdev1", 00:13:34.224 "uuid": "9dd29f56-2dfe-4fe7-a6d3-f41e80d05eab", 00:13:34.224 "is_configured": true, 00:13:34.224 "data_offset": 2048, 00:13:34.224 "data_size": 63488 00:13:34.224 }, 00:13:34.224 { 00:13:34.224 "name": "BaseBdev2", 00:13:34.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.224 "is_configured": false, 00:13:34.224 "data_offset": 0, 00:13:34.224 "data_size": 0 00:13:34.224 }, 00:13:34.224 { 00:13:34.224 "name": "BaseBdev3", 00:13:34.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.224 "is_configured": false, 00:13:34.224 "data_offset": 0, 00:13:34.224 "data_size": 0 00:13:34.224 } 00:13:34.224 ] 00:13:34.224 }' 00:13:34.224 20:40:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:34.224 20:40:17 -- common/autotest_common.sh@10 -- # set +x 00:13:34.836 20:40:18 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:34.836 [2024-04-15 20:40:18.203318] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:34.836 [2024-04-15 20:40:18.203361] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000027380 name Existed_Raid, state configuring 00:13:34.836 20:40:18 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:13:34.836 20:40:18 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:35.095 20:40:18 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:35.354 BaseBdev1 00:13:35.354 20:40:18 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:13:35.354 20:40:18 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:13:35.354 20:40:18 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:35.354 20:40:18 -- common/autotest_common.sh@889 -- # local i 00:13:35.354 20:40:18 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:35.354 20:40:18 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:35.354 20:40:18 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:35.612 20:40:18 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:35.612 [ 00:13:35.612 { 00:13:35.612 "name": "BaseBdev1", 00:13:35.612 "aliases": [ 00:13:35.612 "86fe295d-8f48-465e-ac89-826964295b53" 00:13:35.612 ], 00:13:35.612 "product_name": "Malloc disk", 00:13:35.612 "block_size": 512, 00:13:35.612 "num_blocks": 65536, 00:13:35.612 "uuid": "86fe295d-8f48-465e-ac89-826964295b53", 00:13:35.612 "assigned_rate_limits": { 00:13:35.612 "rw_ios_per_sec": 0, 00:13:35.612 "rw_mbytes_per_sec": 0, 00:13:35.612 "r_mbytes_per_sec": 0, 00:13:35.612 "w_mbytes_per_sec": 0 00:13:35.612 }, 00:13:35.612 "claimed": false, 00:13:35.612 "zoned": false, 00:13:35.612 "supported_io_types": { 00:13:35.612 "read": true, 00:13:35.612 "write": true, 00:13:35.612 "unmap": true, 00:13:35.612 "write_zeroes": true, 00:13:35.612 "flush": true, 00:13:35.612 "reset": true, 00:13:35.612 "compare": false, 00:13:35.612 "compare_and_write": false, 00:13:35.612 "abort": true, 00:13:35.612 "nvme_admin": false, 00:13:35.612 "nvme_io": false 00:13:35.612 }, 00:13:35.612 "memory_domains": [ 00:13:35.612 { 00:13:35.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.612 "dma_device_type": 2 00:13:35.612 } 00:13:35.612 ], 00:13:35.613 "driver_specific": {} 00:13:35.613 } 00:13:35.613 ] 00:13:35.613 20:40:19 -- common/autotest_common.sh@895 -- # return 0 00:13:35.613 20:40:19 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:35.871 [2024-04-15 20:40:19.194746] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:35.871 [2024-04-15 20:40:19.196121] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:35.871 [2024-04-15 20:40:19.196174] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:35.871 [2024-04-15 20:40:19.196184] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:35.871 [2024-04-15 20:40:19.196205] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:35.871 20:40:19 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:13:35.871 20:40:19 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:35.871 
20:40:19 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:35.871 20:40:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:35.871 20:40:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:35.872 20:40:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:35.872 20:40:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:35.872 20:40:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:35.872 20:40:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:35.872 20:40:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:35.872 20:40:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:35.872 20:40:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:35.872 20:40:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:35.872 20:40:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.131 20:40:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:36.131 "name": "Existed_Raid", 00:13:36.131 "uuid": "d21b7845-d034-4569-aba9-bc3da31a0771", 00:13:36.131 "strip_size_kb": 64, 00:13:36.131 "state": "configuring", 00:13:36.131 "raid_level": "concat", 00:13:36.131 "superblock": true, 00:13:36.131 "num_base_bdevs": 3, 00:13:36.131 "num_base_bdevs_discovered": 1, 00:13:36.131 "num_base_bdevs_operational": 3, 00:13:36.131 "base_bdevs_list": [ 00:13:36.131 { 00:13:36.131 "name": "BaseBdev1", 00:13:36.131 "uuid": "86fe295d-8f48-465e-ac89-826964295b53", 00:13:36.131 "is_configured": true, 00:13:36.131 "data_offset": 2048, 00:13:36.131 "data_size": 63488 00:13:36.131 }, 00:13:36.131 { 00:13:36.131 "name": "BaseBdev2", 00:13:36.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.131 "is_configured": false, 00:13:36.131 "data_offset": 0, 00:13:36.131 "data_size": 0 00:13:36.131 }, 00:13:36.131 { 00:13:36.131 "name": "BaseBdev3", 00:13:36.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.131 "is_configured": false, 00:13:36.131 "data_offset": 0, 00:13:36.131 "data_size": 0 00:13:36.131 } 00:13:36.131 ] 00:13:36.131 }' 00:13:36.131 20:40:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:36.131 20:40:19 -- common/autotest_common.sh@10 -- # set +x 00:13:36.390 20:40:19 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:36.649 BaseBdev2 00:13:36.649 [2024-04-15 20:40:20.061883] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:36.649 20:40:20 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:13:36.649 20:40:20 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:13:36.649 20:40:20 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:36.649 20:40:20 -- common/autotest_common.sh@889 -- # local i 00:13:36.649 20:40:20 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:36.649 20:40:20 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:36.649 20:40:20 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:36.907 20:40:20 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:37.166 [ 00:13:37.166 { 00:13:37.166 "name": "BaseBdev2", 00:13:37.166 "aliases": [ 00:13:37.166 
"4136bd8e-7e24-49f8-bd08-dc8124a9f57d" 00:13:37.166 ], 00:13:37.166 "product_name": "Malloc disk", 00:13:37.166 "block_size": 512, 00:13:37.166 "num_blocks": 65536, 00:13:37.166 "uuid": "4136bd8e-7e24-49f8-bd08-dc8124a9f57d", 00:13:37.166 "assigned_rate_limits": { 00:13:37.166 "rw_ios_per_sec": 0, 00:13:37.166 "rw_mbytes_per_sec": 0, 00:13:37.166 "r_mbytes_per_sec": 0, 00:13:37.166 "w_mbytes_per_sec": 0 00:13:37.166 }, 00:13:37.166 "claimed": true, 00:13:37.166 "claim_type": "exclusive_write", 00:13:37.166 "zoned": false, 00:13:37.166 "supported_io_types": { 00:13:37.166 "read": true, 00:13:37.166 "write": true, 00:13:37.166 "unmap": true, 00:13:37.166 "write_zeroes": true, 00:13:37.166 "flush": true, 00:13:37.166 "reset": true, 00:13:37.166 "compare": false, 00:13:37.166 "compare_and_write": false, 00:13:37.166 "abort": true, 00:13:37.166 "nvme_admin": false, 00:13:37.166 "nvme_io": false 00:13:37.166 }, 00:13:37.166 "memory_domains": [ 00:13:37.166 { 00:13:37.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.166 "dma_device_type": 2 00:13:37.166 } 00:13:37.166 ], 00:13:37.166 "driver_specific": {} 00:13:37.166 } 00:13:37.166 ] 00:13:37.166 20:40:20 -- common/autotest_common.sh@895 -- # return 0 00:13:37.166 20:40:20 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:37.166 20:40:20 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:37.166 20:40:20 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:13:37.166 20:40:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:37.166 20:40:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:37.166 20:40:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:37.166 20:40:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:37.166 20:40:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:37.166 20:40:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:37.166 20:40:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:37.166 20:40:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:37.166 20:40:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:37.166 20:40:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:37.166 20:40:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:37.166 20:40:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:37.166 "name": "Existed_Raid", 00:13:37.166 "uuid": "d21b7845-d034-4569-aba9-bc3da31a0771", 00:13:37.166 "strip_size_kb": 64, 00:13:37.166 "state": "configuring", 00:13:37.166 "raid_level": "concat", 00:13:37.166 "superblock": true, 00:13:37.166 "num_base_bdevs": 3, 00:13:37.166 "num_base_bdevs_discovered": 2, 00:13:37.166 "num_base_bdevs_operational": 3, 00:13:37.166 "base_bdevs_list": [ 00:13:37.166 { 00:13:37.166 "name": "BaseBdev1", 00:13:37.166 "uuid": "86fe295d-8f48-465e-ac89-826964295b53", 00:13:37.166 "is_configured": true, 00:13:37.166 "data_offset": 2048, 00:13:37.166 "data_size": 63488 00:13:37.166 }, 00:13:37.166 { 00:13:37.166 "name": "BaseBdev2", 00:13:37.166 "uuid": "4136bd8e-7e24-49f8-bd08-dc8124a9f57d", 00:13:37.166 "is_configured": true, 00:13:37.166 "data_offset": 2048, 00:13:37.166 "data_size": 63488 00:13:37.166 }, 00:13:37.166 { 00:13:37.166 "name": "BaseBdev3", 00:13:37.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.166 "is_configured": false, 00:13:37.166 "data_offset": 0, 00:13:37.166 "data_size": 0 
00:13:37.166 } 00:13:37.166 ] 00:13:37.166 }' 00:13:37.166 20:40:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:37.167 20:40:20 -- common/autotest_common.sh@10 -- # set +x 00:13:37.734 20:40:21 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:37.993 [2024-04-15 20:40:21.301280] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:37.993 [2024-04-15 20:40:21.301413] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000028b80 00:13:37.993 [2024-04-15 20:40:21.301424] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:37.993 [2024-04-15 20:40:21.301501] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:13:37.993 BaseBdev3 00:13:37.993 [2024-04-15 20:40:21.301918] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000028b80 00:13:37.993 [2024-04-15 20:40:21.301937] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000028b80 00:13:37.993 [2024-04-15 20:40:21.302032] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.993 20:40:21 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:13:37.993 20:40:21 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:13:37.993 20:40:21 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:37.993 20:40:21 -- common/autotest_common.sh@889 -- # local i 00:13:37.993 20:40:21 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:37.993 20:40:21 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:37.993 20:40:21 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:37.993 20:40:21 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:38.252 [ 00:13:38.252 { 00:13:38.252 "name": "BaseBdev3", 00:13:38.252 "aliases": [ 00:13:38.252 "710b03eb-f13c-4bde-9471-eb93521c8d98" 00:13:38.252 ], 00:13:38.252 "product_name": "Malloc disk", 00:13:38.252 "block_size": 512, 00:13:38.252 "num_blocks": 65536, 00:13:38.252 "uuid": "710b03eb-f13c-4bde-9471-eb93521c8d98", 00:13:38.252 "assigned_rate_limits": { 00:13:38.252 "rw_ios_per_sec": 0, 00:13:38.252 "rw_mbytes_per_sec": 0, 00:13:38.252 "r_mbytes_per_sec": 0, 00:13:38.252 "w_mbytes_per_sec": 0 00:13:38.252 }, 00:13:38.252 "claimed": true, 00:13:38.252 "claim_type": "exclusive_write", 00:13:38.252 "zoned": false, 00:13:38.252 "supported_io_types": { 00:13:38.252 "read": true, 00:13:38.252 "write": true, 00:13:38.252 "unmap": true, 00:13:38.252 "write_zeroes": true, 00:13:38.252 "flush": true, 00:13:38.252 "reset": true, 00:13:38.252 "compare": false, 00:13:38.252 "compare_and_write": false, 00:13:38.252 "abort": true, 00:13:38.252 "nvme_admin": false, 00:13:38.252 "nvme_io": false 00:13:38.252 }, 00:13:38.252 "memory_domains": [ 00:13:38.252 { 00:13:38.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.252 "dma_device_type": 2 00:13:38.252 } 00:13:38.252 ], 00:13:38.252 "driver_specific": {} 00:13:38.252 } 00:13:38.252 ] 00:13:38.252 20:40:21 -- common/autotest_common.sh@895 -- # return 0 00:13:38.252 20:40:21 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:38.252 20:40:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:38.252 20:40:21 -- 
bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:13:38.252 20:40:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:38.252 20:40:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:38.252 20:40:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:38.252 20:40:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:38.252 20:40:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:38.252 20:40:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:38.252 20:40:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:38.252 20:40:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:38.252 20:40:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:38.252 20:40:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:38.252 20:40:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:38.537 20:40:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:38.537 "name": "Existed_Raid", 00:13:38.537 "uuid": "d21b7845-d034-4569-aba9-bc3da31a0771", 00:13:38.537 "strip_size_kb": 64, 00:13:38.537 "state": "online", 00:13:38.537 "raid_level": "concat", 00:13:38.537 "superblock": true, 00:13:38.537 "num_base_bdevs": 3, 00:13:38.537 "num_base_bdevs_discovered": 3, 00:13:38.537 "num_base_bdevs_operational": 3, 00:13:38.537 "base_bdevs_list": [ 00:13:38.537 { 00:13:38.537 "name": "BaseBdev1", 00:13:38.537 "uuid": "86fe295d-8f48-465e-ac89-826964295b53", 00:13:38.537 "is_configured": true, 00:13:38.537 "data_offset": 2048, 00:13:38.537 "data_size": 63488 00:13:38.537 }, 00:13:38.537 { 00:13:38.537 "name": "BaseBdev2", 00:13:38.537 "uuid": "4136bd8e-7e24-49f8-bd08-dc8124a9f57d", 00:13:38.537 "is_configured": true, 00:13:38.537 "data_offset": 2048, 00:13:38.537 "data_size": 63488 00:13:38.537 }, 00:13:38.537 { 00:13:38.537 "name": "BaseBdev3", 00:13:38.537 "uuid": "710b03eb-f13c-4bde-9471-eb93521c8d98", 00:13:38.537 "is_configured": true, 00:13:38.537 "data_offset": 2048, 00:13:38.537 "data_size": 63488 00:13:38.537 } 00:13:38.537 ] 00:13:38.537 }' 00:13:38.537 20:40:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:38.537 20:40:21 -- common/autotest_common.sh@10 -- # set +x 00:13:39.152 20:40:22 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:39.152 [2024-04-15 20:40:22.533213] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:39.152 [2024-04-15 20:40:22.533243] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:39.152 [2024-04-15 20:40:22.533279] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:39.152 20:40:22 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:13:39.152 20:40:22 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:13:39.152 20:40:22 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:13:39.152 20:40:22 -- bdev/bdev_raid.sh@197 -- # return 1 00:13:39.152 20:40:22 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:13:39.152 20:40:22 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:13:39.152 20:40:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:39.152 20:40:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:13:39.152 20:40:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:39.152 20:40:22 
-- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:39.152 20:40:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:39.152 20:40:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:39.152 20:40:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:39.152 20:40:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:39.152 20:40:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:39.152 20:40:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:39.152 20:40:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.411 20:40:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:39.411 "name": "Existed_Raid", 00:13:39.411 "uuid": "d21b7845-d034-4569-aba9-bc3da31a0771", 00:13:39.411 "strip_size_kb": 64, 00:13:39.411 "state": "offline", 00:13:39.411 "raid_level": "concat", 00:13:39.411 "superblock": true, 00:13:39.411 "num_base_bdevs": 3, 00:13:39.411 "num_base_bdevs_discovered": 2, 00:13:39.411 "num_base_bdevs_operational": 2, 00:13:39.411 "base_bdevs_list": [ 00:13:39.411 { 00:13:39.411 "name": null, 00:13:39.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.411 "is_configured": false, 00:13:39.411 "data_offset": 2048, 00:13:39.411 "data_size": 63488 00:13:39.411 }, 00:13:39.411 { 00:13:39.411 "name": "BaseBdev2", 00:13:39.411 "uuid": "4136bd8e-7e24-49f8-bd08-dc8124a9f57d", 00:13:39.411 "is_configured": true, 00:13:39.411 "data_offset": 2048, 00:13:39.411 "data_size": 63488 00:13:39.411 }, 00:13:39.411 { 00:13:39.411 "name": "BaseBdev3", 00:13:39.411 "uuid": "710b03eb-f13c-4bde-9471-eb93521c8d98", 00:13:39.411 "is_configured": true, 00:13:39.411 "data_offset": 2048, 00:13:39.411 "data_size": 63488 00:13:39.411 } 00:13:39.411 ] 00:13:39.411 }' 00:13:39.411 20:40:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:39.411 20:40:22 -- common/autotest_common.sh@10 -- # set +x 00:13:39.979 20:40:23 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:13:39.979 20:40:23 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:39.979 20:40:23 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:13:39.979 20:40:23 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:40.237 20:40:23 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:13:40.237 20:40:23 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:40.237 20:40:23 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:40.237 [2024-04-15 20:40:23.639190] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:40.497 20:40:23 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:13:40.497 20:40:23 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:40.497 20:40:23 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:40.497 20:40:23 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:13:40.497 20:40:23 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:13:40.497 20:40:23 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:40.497 20:40:23 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:13:40.756 [2024-04-15 20:40:24.093999] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 
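The expected-state flip here is the point of the check: has_redundancy() returns 1 for concat, so once a single base bdev is deleted the test expects Existed_Raid to drop from "online" to "offline" with only 2 of 3 base bdevs left (a redundant level such as raid1 would be expected to stay online). A minimal stand-alone version of the probe, assuming an SPDK app is already serving /var/tmp/spdk-raid.sock; the RPC names, socket path and jq filter are the ones this run uses:

    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
    # concat carries no redundancy, so the array should now report "offline"
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state'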
00:13:40.756 [2024-04-15 20:40:24.094063] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000028b80 name Existed_Raid, state offline 00:13:40.756 20:40:24 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:13:40.756 20:40:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:40.756 20:40:24 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:40.756 20:40:24 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:13:41.016 20:40:24 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:13:41.016 20:40:24 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:13:41.016 20:40:24 -- bdev/bdev_raid.sh@287 -- # killprocess 50431 00:13:41.016 20:40:24 -- common/autotest_common.sh@926 -- # '[' -z 50431 ']' 00:13:41.016 20:40:24 -- common/autotest_common.sh@930 -- # kill -0 50431 00:13:41.016 20:40:24 -- common/autotest_common.sh@931 -- # uname 00:13:41.016 20:40:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:41.016 20:40:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 50431 00:13:41.016 20:40:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:41.016 20:40:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:41.016 20:40:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 50431' 00:13:41.016 killing process with pid 50431 00:13:41.016 20:40:24 -- common/autotest_common.sh@945 -- # kill 50431 00:13:41.016 20:40:24 -- common/autotest_common.sh@950 -- # wait 50431 00:13:41.016 [2024-04-15 20:40:24.405782] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:41.016 [2024-04-15 20:40:24.405887] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:42.395 20:40:25 -- bdev/bdev_raid.sh@289 -- # return 0 00:13:42.395 00:13:42.395 real 0m11.634s 00:13:42.395 user 0m19.397s 00:13:42.395 sys 0m1.473s 00:13:42.395 ************************************ 00:13:42.395 END TEST raid_state_function_test_sb 00:13:42.395 ************************************ 00:13:42.395 20:40:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:42.395 20:40:25 -- common/autotest_common.sh@10 -- # set +x 00:13:42.395 20:40:25 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:13:42.395 20:40:25 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:13:42.395 20:40:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:42.395 20:40:25 -- common/autotest_common.sh@10 -- # set +x 00:13:42.395 ************************************ 00:13:42.395 START TEST raid_superblock_test 00:13:42.395 ************************************ 00:13:42.395 20:40:25 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 3 00:13:42.395 20:40:25 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:13:42.395 20:40:25 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:13:42.395 20:40:25 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:13:42.395 20:40:25 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:13:42.395 20:40:25 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:13:42.395 20:40:25 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:13:42.395 20:40:25 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:13:42.395 20:40:25 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:13:42.395 20:40:25 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:13:42.395 20:40:25 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:13:42.395 
20:40:25 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:13:42.395 20:40:25 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:13:42.395 20:40:25 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:13:42.395 20:40:25 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:13:42.395 20:40:25 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:13:42.395 20:40:25 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:13:42.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:42.395 20:40:25 -- bdev/bdev_raid.sh@357 -- # raid_pid=50818 00:13:42.395 20:40:25 -- bdev/bdev_raid.sh@358 -- # waitforlisten 50818 /var/tmp/spdk-raid.sock 00:13:42.395 20:40:25 -- common/autotest_common.sh@819 -- # '[' -z 50818 ']' 00:13:42.395 20:40:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:42.395 20:40:25 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:13:42.395 20:40:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:42.395 20:40:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:42.395 20:40:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:42.395 20:40:25 -- common/autotest_common.sh@10 -- # set +x 00:13:42.395 [2024-04-15 20:40:25.887671] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:13:42.395 [2024-04-15 20:40:25.887820] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid50818 ] 00:13:42.654 [2024-04-15 20:40:26.033965] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.913 [2024-04-15 20:40:26.226379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.172 [2024-04-15 20:40:26.420625] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:43.740 20:40:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:43.740 20:40:27 -- common/autotest_common.sh@852 -- # return 0 00:13:43.740 20:40:27 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:13:43.740 20:40:27 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:13:43.740 20:40:27 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:13:43.740 20:40:27 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:13:43.740 20:40:27 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:43.740 20:40:27 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:43.741 20:40:27 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:13:43.741 20:40:27 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:43.741 20:40:27 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:13:44.000 malloc1 00:13:44.000 20:40:27 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:44.258 [2024-04-15 20:40:27.594061] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:44.258 [2024-04-15 20:40:27.594155] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
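Each leg of the array being set up here is a passthru bdev stacked on a malloc bdev (32 MB in 512-byte blocks), which gives the leg a fixed name and UUID independent of the backing device. One leg, built with the same two RPCs this run issues:

    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 \
        -u 00000000-0000-0000-0000-000000000001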
00:13:44.259 [2024-04-15 20:40:27.594200] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027080 00:13:44.259 [2024-04-15 20:40:27.594242] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.259 [2024-04-15 20:40:27.595943] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.259 [2024-04-15 20:40:27.595993] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:44.259 pt1 00:13:44.259 20:40:27 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:13:44.259 20:40:27 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:13:44.259 20:40:27 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:13:44.259 20:40:27 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:13:44.259 20:40:27 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:44.259 20:40:27 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:44.259 20:40:27 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:13:44.259 20:40:27 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:44.259 20:40:27 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:13:44.517 malloc2 00:13:44.518 20:40:27 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:44.518 [2024-04-15 20:40:27.939536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:44.518 [2024-04-15 20:40:27.939631] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.518 [2024-04-15 20:40:27.939965] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000028e80 00:13:44.518 [2024-04-15 20:40:27.940011] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.518 [2024-04-15 20:40:27.941525] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.518 [2024-04-15 20:40:27.941574] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:44.518 pt2 00:13:44.518 20:40:27 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:13:44.518 20:40:27 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:13:44.518 20:40:27 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:13:44.518 20:40:27 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:13:44.518 20:40:27 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:44.518 20:40:27 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:44.518 20:40:27 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:13:44.518 20:40:27 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:44.518 20:40:27 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:13:44.776 malloc3 00:13:44.776 20:40:28 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:45.033 [2024-04-15 20:40:28.304256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:45.033 [2024-04-15 20:40:28.304339] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
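Once pt3 finishes registering, the array itself is assembled from the three passthru bdevs in a single call: -z 64 becomes the "strip_size_kb": 64 seen in the dump, and -s asks for an on-disk superblock ("superblock": true). That superblock is also what makes the later negative test fail as intended: re-running bdev_raid_create over 'malloc1 malloc2 malloc3' hits the existing superblock and returns JSON-RPC error -17, "File exists". The create call, in the form this run uses:

    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
        -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s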
00:13:45.033 [2024-04-15 20:40:28.304384] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002ac80 00:13:45.033 [2024-04-15 20:40:28.304421] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.033 [2024-04-15 20:40:28.306104] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.033 [2024-04-15 20:40:28.306157] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:45.033 pt3 00:13:45.033 20:40:28 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:13:45.033 20:40:28 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:13:45.033 20:40:28 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:13:45.033 [2024-04-15 20:40:28.480047] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:45.033 [2024-04-15 20:40:28.481453] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:45.033 [2024-04-15 20:40:28.481495] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:45.033 [2024-04-15 20:40:28.481596] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002c180 00:13:45.033 [2024-04-15 20:40:28.481606] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:45.033 [2024-04-15 20:40:28.481716] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:13:45.033 [2024-04-15 20:40:28.481933] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002c180 00:13:45.033 [2024-04-15 20:40:28.481943] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002c180 00:13:45.033 [2024-04-15 20:40:28.482059] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.033 20:40:28 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:45.033 20:40:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:45.033 20:40:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:45.033 20:40:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:45.033 20:40:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:45.033 20:40:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:45.033 20:40:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:45.033 20:40:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:45.033 20:40:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:45.033 20:40:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:45.033 20:40:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:45.033 20:40:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.291 20:40:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:45.291 "name": "raid_bdev1", 00:13:45.291 "uuid": "ed879c89-f5d0-492b-be51-71385d95bbc9", 00:13:45.291 "strip_size_kb": 64, 00:13:45.291 "state": "online", 00:13:45.291 "raid_level": "concat", 00:13:45.291 "superblock": true, 00:13:45.291 "num_base_bdevs": 3, 00:13:45.291 "num_base_bdevs_discovered": 3, 00:13:45.291 "num_base_bdevs_operational": 3, 00:13:45.291 "base_bdevs_list": [ 00:13:45.291 { 00:13:45.291 "name": "pt1", 00:13:45.291 "uuid": 
"e7e06d71-8654-5c06-a709-642d2e2295a3", 00:13:45.291 "is_configured": true, 00:13:45.292 "data_offset": 2048, 00:13:45.292 "data_size": 63488 00:13:45.292 }, 00:13:45.292 { 00:13:45.292 "name": "pt2", 00:13:45.292 "uuid": "499142aa-7c6b-5004-9493-6f9ece6e7bd6", 00:13:45.292 "is_configured": true, 00:13:45.292 "data_offset": 2048, 00:13:45.292 "data_size": 63488 00:13:45.292 }, 00:13:45.292 { 00:13:45.292 "name": "pt3", 00:13:45.292 "uuid": "f390b765-726b-55da-8256-df3bc69f2dde", 00:13:45.292 "is_configured": true, 00:13:45.292 "data_offset": 2048, 00:13:45.292 "data_size": 63488 00:13:45.292 } 00:13:45.292 ] 00:13:45.292 }' 00:13:45.292 20:40:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:45.292 20:40:28 -- common/autotest_common.sh@10 -- # set +x 00:13:45.860 20:40:29 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:45.860 20:40:29 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:13:45.860 [2024-04-15 20:40:29.350783] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:46.120 20:40:29 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=ed879c89-f5d0-492b-be51-71385d95bbc9 00:13:46.120 20:40:29 -- bdev/bdev_raid.sh@380 -- # '[' -z ed879c89-f5d0-492b-be51-71385d95bbc9 ']' 00:13:46.120 20:40:29 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:46.120 [2024-04-15 20:40:29.526397] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:46.120 [2024-04-15 20:40:29.526434] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:46.120 [2024-04-15 20:40:29.526496] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:46.120 [2024-04-15 20:40:29.526533] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:46.120 [2024-04-15 20:40:29.526541] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002c180 name raid_bdev1, state offline 00:13:46.120 20:40:29 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:13:46.120 20:40:29 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:46.387 20:40:29 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:13:46.387 20:40:29 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:13:46.387 20:40:29 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:13:46.387 20:40:29 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:13:46.649 20:40:29 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:13:46.649 20:40:29 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:46.649 20:40:30 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:13:46.649 20:40:30 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:13:46.907 20:40:30 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:46.907 20:40:30 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:13:47.166 20:40:30 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:13:47.166 20:40:30 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:13:47.166 20:40:30 -- common/autotest_common.sh@640 -- # local es=0 00:13:47.166 20:40:30 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:13:47.166 20:40:30 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:47.166 20:40:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:47.166 20:40:30 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:47.166 20:40:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:47.166 20:40:30 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:47.166 20:40:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:47.166 20:40:30 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:47.166 20:40:30 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:47.166 20:40:30 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:13:47.425 [2024-04-15 20:40:30.668750] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:47.425 [2024-04-15 20:40:30.670213] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:47.425 [2024-04-15 20:40:30.670247] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:47.425 [2024-04-15 20:40:30.670273] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:13:47.425 [2024-04-15 20:40:30.670329] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:13:47.425 [2024-04-15 20:40:30.670375] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:13:47.425 [2024-04-15 20:40:30.670412] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:47.425 [2024-04-15 20:40:30.670422] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002c780 name raid_bdev1, state configuring 00:13:47.425 request: 00:13:47.425 { 00:13:47.425 "name": "raid_bdev1", 00:13:47.425 "raid_level": "concat", 00:13:47.425 "base_bdevs": [ 00:13:47.425 "malloc1", 00:13:47.425 "malloc2", 00:13:47.425 "malloc3" 00:13:47.425 ], 00:13:47.425 "superblock": false, 00:13:47.425 "strip_size_kb": 64, 00:13:47.425 "method": "bdev_raid_create", 00:13:47.425 "req_id": 1 00:13:47.425 } 00:13:47.425 Got JSON-RPC error response 00:13:47.425 response: 00:13:47.425 { 00:13:47.425 "code": -17, 00:13:47.425 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:47.425 } 00:13:47.425 20:40:30 -- common/autotest_common.sh@643 -- # es=1 00:13:47.425 20:40:30 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:47.425 20:40:30 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:47.425 20:40:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:47.425 20:40:30 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:47.425 20:40:30 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:13:47.425 20:40:30 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:13:47.425 20:40:30 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:13:47.425 20:40:30 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:47.684 [2024-04-15 20:40:31.040153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:47.684 [2024-04-15 20:40:31.040226] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.684 [2024-04-15 20:40:31.040276] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002d980 00:13:47.684 [2024-04-15 20:40:31.040300] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.684 [2024-04-15 20:40:31.042029] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.684 [2024-04-15 20:40:31.042071] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:47.684 [2024-04-15 20:40:31.042160] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:13:47.684 [2024-04-15 20:40:31.042222] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:47.684 pt1 00:13:47.684 20:40:31 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:13:47.684 20:40:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:47.684 20:40:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:47.684 20:40:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:47.684 20:40:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:47.684 20:40:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:47.684 20:40:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:47.684 20:40:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:47.684 20:40:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:47.684 20:40:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:47.684 20:40:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.684 20:40:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:47.943 20:40:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:47.943 "name": "raid_bdev1", 00:13:47.943 "uuid": "ed879c89-f5d0-492b-be51-71385d95bbc9", 00:13:47.943 "strip_size_kb": 64, 00:13:47.943 "state": "configuring", 00:13:47.943 "raid_level": "concat", 00:13:47.943 "superblock": true, 00:13:47.943 "num_base_bdevs": 3, 00:13:47.943 "num_base_bdevs_discovered": 1, 00:13:47.943 "num_base_bdevs_operational": 3, 00:13:47.943 "base_bdevs_list": [ 00:13:47.943 { 00:13:47.943 "name": "pt1", 00:13:47.943 "uuid": "e7e06d71-8654-5c06-a709-642d2e2295a3", 00:13:47.943 "is_configured": true, 00:13:47.943 "data_offset": 2048, 00:13:47.943 "data_size": 63488 00:13:47.943 }, 00:13:47.943 { 00:13:47.943 "name": null, 00:13:47.943 "uuid": "499142aa-7c6b-5004-9493-6f9ece6e7bd6", 00:13:47.943 "is_configured": false, 00:13:47.943 "data_offset": 2048, 00:13:47.943 "data_size": 63488 00:13:47.943 }, 00:13:47.943 { 00:13:47.943 "name": null, 00:13:47.943 "uuid": "f390b765-726b-55da-8256-df3bc69f2dde", 00:13:47.943 "is_configured": false, 00:13:47.943 
"data_offset": 2048, 00:13:47.943 "data_size": 63488 00:13:47.943 } 00:13:47.943 ] 00:13:47.943 }' 00:13:47.943 20:40:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:47.943 20:40:31 -- common/autotest_common.sh@10 -- # set +x 00:13:48.510 20:40:31 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:13:48.510 20:40:31 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:48.769 [2024-04-15 20:40:32.050659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:48.769 [2024-04-15 20:40:32.050746] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.769 [2024-04-15 20:40:32.050793] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002f480 00:13:48.769 [2024-04-15 20:40:32.050814] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.769 [2024-04-15 20:40:32.051126] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.769 [2024-04-15 20:40:32.051150] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:48.769 [2024-04-15 20:40:32.051239] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:13:48.769 [2024-04-15 20:40:32.051259] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:48.769 pt2 00:13:48.769 20:40:32 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:48.769 [2024-04-15 20:40:32.218422] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:48.769 20:40:32 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:13:48.769 20:40:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:48.769 20:40:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:48.769 20:40:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:48.769 20:40:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:48.769 20:40:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:48.769 20:40:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:48.769 20:40:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:48.769 20:40:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:48.769 20:40:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:48.769 20:40:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.769 20:40:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:49.028 20:40:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:49.028 "name": "raid_bdev1", 00:13:49.028 "uuid": "ed879c89-f5d0-492b-be51-71385d95bbc9", 00:13:49.028 "strip_size_kb": 64, 00:13:49.028 "state": "configuring", 00:13:49.028 "raid_level": "concat", 00:13:49.028 "superblock": true, 00:13:49.028 "num_base_bdevs": 3, 00:13:49.028 "num_base_bdevs_discovered": 1, 00:13:49.028 "num_base_bdevs_operational": 3, 00:13:49.028 "base_bdevs_list": [ 00:13:49.028 { 00:13:49.028 "name": "pt1", 00:13:49.028 "uuid": "e7e06d71-8654-5c06-a709-642d2e2295a3", 00:13:49.028 "is_configured": true, 00:13:49.028 "data_offset": 2048, 00:13:49.028 "data_size": 63488 00:13:49.028 }, 00:13:49.028 { 00:13:49.028 "name": null, 00:13:49.028 "uuid": 
"499142aa-7c6b-5004-9493-6f9ece6e7bd6", 00:13:49.028 "is_configured": false, 00:13:49.028 "data_offset": 2048, 00:13:49.028 "data_size": 63488 00:13:49.028 }, 00:13:49.028 { 00:13:49.028 "name": null, 00:13:49.028 "uuid": "f390b765-726b-55da-8256-df3bc69f2dde", 00:13:49.028 "is_configured": false, 00:13:49.028 "data_offset": 2048, 00:13:49.028 "data_size": 63488 00:13:49.028 } 00:13:49.028 ] 00:13:49.028 }' 00:13:49.028 20:40:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:49.028 20:40:32 -- common/autotest_common.sh@10 -- # set +x 00:13:49.595 20:40:33 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:13:49.595 20:40:33 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:13:49.595 20:40:33 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:49.853 [2024-04-15 20:40:33.281010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:49.853 [2024-04-15 20:40:33.281097] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:49.853 [2024-04-15 20:40:33.281153] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000030c80 00:13:49.853 [2024-04-15 20:40:33.281179] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:49.853 [2024-04-15 20:40:33.281510] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:49.853 [2024-04-15 20:40:33.281538] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:49.853 [2024-04-15 20:40:33.281619] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:13:49.853 [2024-04-15 20:40:33.281639] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:49.853 pt2 00:13:49.853 20:40:33 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:13:49.853 20:40:33 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:13:49.853 20:40:33 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:50.111 [2024-04-15 20:40:33.524707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:50.111 [2024-04-15 20:40:33.524801] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.111 [2024-04-15 20:40:33.524841] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000032180 00:13:50.111 [2024-04-15 20:40:33.524868] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.111 [2024-04-15 20:40:33.525191] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.111 [2024-04-15 20:40:33.525221] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:50.111 [2024-04-15 20:40:33.525326] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:13:50.111 [2024-04-15 20:40:33.525346] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:50.111 [2024-04-15 20:40:33.525413] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002ee80 00:13:50.111 [2024-04-15 20:40:33.525422] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:50.111 [2024-04-15 20:40:33.525500] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005c70 00:13:50.111 pt3 00:13:50.111 [2024-04-15 20:40:33.526055] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002ee80 00:13:50.111 [2024-04-15 20:40:33.526124] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002ee80 00:13:50.111 [2024-04-15 20:40:33.526418] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.111 20:40:33 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:13:50.111 20:40:33 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:13:50.111 20:40:33 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:50.111 20:40:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:50.111 20:40:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:50.111 20:40:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:13:50.111 20:40:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:50.111 20:40:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:50.111 20:40:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:50.111 20:40:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:50.111 20:40:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:50.111 20:40:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:50.111 20:40:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.112 20:40:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:50.372 20:40:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:50.372 "name": "raid_bdev1", 00:13:50.372 "uuid": "ed879c89-f5d0-492b-be51-71385d95bbc9", 00:13:50.372 "strip_size_kb": 64, 00:13:50.372 "state": "online", 00:13:50.372 "raid_level": "concat", 00:13:50.372 "superblock": true, 00:13:50.372 "num_base_bdevs": 3, 00:13:50.372 "num_base_bdevs_discovered": 3, 00:13:50.372 "num_base_bdevs_operational": 3, 00:13:50.372 "base_bdevs_list": [ 00:13:50.372 { 00:13:50.372 "name": "pt1", 00:13:50.372 "uuid": "e7e06d71-8654-5c06-a709-642d2e2295a3", 00:13:50.372 "is_configured": true, 00:13:50.372 "data_offset": 2048, 00:13:50.372 "data_size": 63488 00:13:50.372 }, 00:13:50.372 { 00:13:50.372 "name": "pt2", 00:13:50.372 "uuid": "499142aa-7c6b-5004-9493-6f9ece6e7bd6", 00:13:50.372 "is_configured": true, 00:13:50.372 "data_offset": 2048, 00:13:50.372 "data_size": 63488 00:13:50.372 }, 00:13:50.372 { 00:13:50.372 "name": "pt3", 00:13:50.372 "uuid": "f390b765-726b-55da-8256-df3bc69f2dde", 00:13:50.372 "is_configured": true, 00:13:50.372 "data_offset": 2048, 00:13:50.372 "data_size": 63488 00:13:50.372 } 00:13:50.372 ] 00:13:50.372 }' 00:13:50.372 20:40:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:50.372 20:40:33 -- common/autotest_common.sh@10 -- # set +x 00:13:51.311 20:40:34 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:51.311 20:40:34 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:13:51.311 [2024-04-15 20:40:34.651127] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:51.311 20:40:34 -- bdev/bdev_raid.sh@430 -- # '[' ed879c89-f5d0-492b-be51-71385d95bbc9 '!=' ed879c89-f5d0-492b-be51-71385d95bbc9 ']' 00:13:51.311 20:40:34 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:13:51.311 20:40:34 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:13:51.311 
20:40:34 -- bdev/bdev_raid.sh@197 -- # return 1 00:13:51.311 20:40:34 -- bdev/bdev_raid.sh@511 -- # killprocess 50818 00:13:51.311 20:40:34 -- common/autotest_common.sh@926 -- # '[' -z 50818 ']' 00:13:51.311 20:40:34 -- common/autotest_common.sh@930 -- # kill -0 50818 00:13:51.311 20:40:34 -- common/autotest_common.sh@931 -- # uname 00:13:51.311 20:40:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:51.311 20:40:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 50818 00:13:51.311 killing process with pid 50818 00:13:51.311 20:40:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:51.311 20:40:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:51.311 20:40:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 50818' 00:13:51.311 20:40:34 -- common/autotest_common.sh@945 -- # kill 50818 00:13:51.311 20:40:34 -- common/autotest_common.sh@950 -- # wait 50818 00:13:51.311 [2024-04-15 20:40:34.690560] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:51.311 [2024-04-15 20:40:34.690631] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:51.311 [2024-04-15 20:40:34.690677] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:51.311 [2024-04-15 20:40:34.690688] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002ee80 name raid_bdev1, state offline 00:13:51.570 [2024-04-15 20:40:34.981815] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:52.948 20:40:36 -- bdev/bdev_raid.sh@513 -- # return 0 00:13:52.948 00:13:52.948 real 0m10.618s 00:13:52.948 user 0m17.487s 00:13:52.948 sys 0m1.272s 00:13:52.948 20:40:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:52.948 20:40:36 -- common/autotest_common.sh@10 -- # set +x 00:13:52.948 ************************************ 00:13:52.948 END TEST raid_superblock_test 00:13:52.948 ************************************ 00:13:52.948 20:40:36 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:13:52.948 20:40:36 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:13:52.948 20:40:36 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:13:52.948 20:40:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:52.948 20:40:36 -- common/autotest_common.sh@10 -- # set +x 00:13:52.948 ************************************ 00:13:52.948 START TEST raid_state_function_test 00:13:52.948 ************************************ 00:13:52.948 20:40:36 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 false 00:13:52.948 20:40:36 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:13:52.948 20:40:36 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:13:52.948 20:40:36 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:13:52.948 20:40:36 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:13:52.948 20:40:36 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:13:52.948 20:40:36 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:13:52.948 20:40:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:52.948 20:40:36 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:13:52.948 20:40:36 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:52.948 20:40:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:52.948 20:40:36 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 
00:13:52.948 20:40:36 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:52.948 20:40:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:52.948 20:40:36 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:13:52.948 20:40:36 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:52.948 20:40:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:52.948 20:40:36 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:13:52.948 20:40:36 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:13:52.948 20:40:36 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:13:52.948 20:40:36 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:13:52.948 20:40:36 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:13:52.948 20:40:36 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:13:52.948 20:40:36 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:13:52.948 20:40:36 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:13:52.948 20:40:36 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:13:52.948 Process raid pid: 51126 00:13:52.948 20:40:36 -- bdev/bdev_raid.sh@226 -- # raid_pid=51126 00:13:52.948 20:40:36 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:52.948 20:40:36 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 51126' 00:13:52.948 20:40:36 -- bdev/bdev_raid.sh@228 -- # waitforlisten 51126 /var/tmp/spdk-raid.sock 00:13:52.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:52.948 20:40:36 -- common/autotest_common.sh@819 -- # '[' -z 51126 ']' 00:13:52.948 20:40:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:52.948 20:40:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:52.948 20:40:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:52.948 20:40:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:52.948 20:40:36 -- common/autotest_common.sh@10 -- # set +x 00:13:53.206 [2024-04-15 20:40:36.562732] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
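raid_state_function_test exercises the same verify machinery at raid1: the level check '[' raid1 '!=' raid1 ']' leaves strip_size=0 and superblock_create_arg empty, so the create RPC is built without -z and without -s. Against the same socket it comes out as:

    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
        -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

The raid then sits in "configuring" until all three BaseBdevs exist and are claimed.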
00:13:53.206 [2024-04-15 20:40:36.562879] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.469 [2024-04-15 20:40:36.720407] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.469 [2024-04-15 20:40:36.932577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.730 [2024-04-15 20:40:37.147915] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:53.989 20:40:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:53.989 20:40:37 -- common/autotest_common.sh@852 -- # return 0 00:13:53.989 20:40:37 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:54.249 [2024-04-15 20:40:37.513339] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:54.249 [2024-04-15 20:40:37.513423] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:54.249 [2024-04-15 20:40:37.513437] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:54.249 [2024-04-15 20:40:37.513459] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:54.249 [2024-04-15 20:40:37.513469] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:54.249 [2024-04-15 20:40:37.513513] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:54.249 20:40:37 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:54.249 20:40:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:54.249 20:40:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:54.249 20:40:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:54.249 20:40:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:54.249 20:40:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:54.249 20:40:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:54.249 20:40:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:54.249 20:40:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:54.249 20:40:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:54.249 20:40:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:54.249 20:40:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.509 20:40:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:54.509 "name": "Existed_Raid", 00:13:54.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.509 "strip_size_kb": 0, 00:13:54.509 "state": "configuring", 00:13:54.509 "raid_level": "raid1", 00:13:54.509 "superblock": false, 00:13:54.509 "num_base_bdevs": 3, 00:13:54.509 "num_base_bdevs_discovered": 0, 00:13:54.509 "num_base_bdevs_operational": 3, 00:13:54.509 "base_bdevs_list": [ 00:13:54.509 { 00:13:54.509 "name": "BaseBdev1", 00:13:54.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.509 "is_configured": false, 00:13:54.509 "data_offset": 0, 00:13:54.509 "data_size": 0 00:13:54.509 }, 00:13:54.509 { 00:13:54.509 "name": "BaseBdev2", 00:13:54.509 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:54.509 "is_configured": false, 00:13:54.509 "data_offset": 0, 00:13:54.509 "data_size": 0 00:13:54.509 }, 00:13:54.509 { 00:13:54.509 "name": "BaseBdev3", 00:13:54.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.509 "is_configured": false, 00:13:54.509 "data_offset": 0, 00:13:54.509 "data_size": 0 00:13:54.509 } 00:13:54.509 ] 00:13:54.509 }' 00:13:54.509 20:40:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:54.509 20:40:37 -- common/autotest_common.sh@10 -- # set +x 00:13:55.076 20:40:38 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:55.076 [2024-04-15 20:40:38.543713] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:55.076 [2024-04-15 20:40:38.543762] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:13:55.076 20:40:38 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:55.334 [2024-04-15 20:40:38.735416] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:55.334 [2024-04-15 20:40:38.735498] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:55.334 [2024-04-15 20:40:38.735511] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:55.334 [2024-04-15 20:40:38.735528] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:55.334 [2024-04-15 20:40:38.735536] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:55.334 [2024-04-15 20:40:38.735567] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:55.334 20:40:38 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:55.592 [2024-04-15 20:40:38.963810] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:55.592 BaseBdev1 00:13:55.592 20:40:38 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:13:55.592 20:40:38 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:13:55.592 20:40:38 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:55.592 20:40:38 -- common/autotest_common.sh@889 -- # local i 00:13:55.592 20:40:38 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:55.592 20:40:38 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:55.592 20:40:38 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:55.852 20:40:39 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:56.111 [ 00:13:56.111 { 00:13:56.111 "name": "BaseBdev1", 00:13:56.111 "aliases": [ 00:13:56.111 "1519ce16-2e96-430f-8b64-edd5c5fdfc07" 00:13:56.111 ], 00:13:56.111 "product_name": "Malloc disk", 00:13:56.111 "block_size": 512, 00:13:56.111 "num_blocks": 65536, 00:13:56.111 "uuid": "1519ce16-2e96-430f-8b64-edd5c5fdfc07", 00:13:56.111 "assigned_rate_limits": { 00:13:56.111 "rw_ios_per_sec": 0, 00:13:56.111 "rw_mbytes_per_sec": 0, 00:13:56.111 "r_mbytes_per_sec": 0, 00:13:56.111 "w_mbytes_per_sec": 0 
00:13:56.111 }, 00:13:56.111 "claimed": true, 00:13:56.111 "claim_type": "exclusive_write", 00:13:56.111 "zoned": false, 00:13:56.111 "supported_io_types": { 00:13:56.111 "read": true, 00:13:56.111 "write": true, 00:13:56.111 "unmap": true, 00:13:56.111 "write_zeroes": true, 00:13:56.111 "flush": true, 00:13:56.111 "reset": true, 00:13:56.111 "compare": false, 00:13:56.111 "compare_and_write": false, 00:13:56.111 "abort": true, 00:13:56.111 "nvme_admin": false, 00:13:56.111 "nvme_io": false 00:13:56.111 }, 00:13:56.111 "memory_domains": [ 00:13:56.111 { 00:13:56.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.111 "dma_device_type": 2 00:13:56.111 } 00:13:56.111 ], 00:13:56.111 "driver_specific": {} 00:13:56.111 } 00:13:56.111 ] 00:13:56.111 20:40:39 -- common/autotest_common.sh@895 -- # return 0 00:13:56.111 20:40:39 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:56.111 20:40:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:56.111 20:40:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:56.111 20:40:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:56.111 20:40:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:56.111 20:40:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:56.111 20:40:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:56.111 20:40:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:56.111 20:40:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:56.111 20:40:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:56.111 20:40:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:56.111 20:40:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.111 20:40:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:56.111 "name": "Existed_Raid", 00:13:56.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.111 "strip_size_kb": 0, 00:13:56.111 "state": "configuring", 00:13:56.111 "raid_level": "raid1", 00:13:56.111 "superblock": false, 00:13:56.111 "num_base_bdevs": 3, 00:13:56.111 "num_base_bdevs_discovered": 1, 00:13:56.111 "num_base_bdevs_operational": 3, 00:13:56.111 "base_bdevs_list": [ 00:13:56.111 { 00:13:56.111 "name": "BaseBdev1", 00:13:56.111 "uuid": "1519ce16-2e96-430f-8b64-edd5c5fdfc07", 00:13:56.111 "is_configured": true, 00:13:56.111 "data_offset": 0, 00:13:56.111 "data_size": 65536 00:13:56.111 }, 00:13:56.111 { 00:13:56.111 "name": "BaseBdev2", 00:13:56.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.111 "is_configured": false, 00:13:56.111 "data_offset": 0, 00:13:56.111 "data_size": 0 00:13:56.111 }, 00:13:56.111 { 00:13:56.111 "name": "BaseBdev3", 00:13:56.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.111 "is_configured": false, 00:13:56.111 "data_offset": 0, 00:13:56.111 "data_size": 0 00:13:56.111 } 00:13:56.111 ] 00:13:56.111 }' 00:13:56.111 20:40:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:56.111 20:40:39 -- common/autotest_common.sh@10 -- # set +x 00:13:56.679 20:40:40 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:56.936 [2024-04-15 20:40:40.273891] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:56.936 [2024-04-15 20:40:40.273946] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027380 
name Existed_Raid, state configuring 00:13:56.936 20:40:40 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:13:56.937 20:40:40 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:57.194 [2024-04-15 20:40:40.461630] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:57.194 [2024-04-15 20:40:40.463171] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:57.194 [2024-04-15 20:40:40.463241] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:57.194 [2024-04-15 20:40:40.463251] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:57.194 [2024-04-15 20:40:40.463293] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:57.194 20:40:40 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:13:57.194 20:40:40 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:57.194 20:40:40 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:57.194 20:40:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:57.194 20:40:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:57.194 20:40:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:57.194 20:40:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:57.194 20:40:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:57.194 20:40:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:57.194 20:40:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:57.194 20:40:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:57.194 20:40:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:57.194 20:40:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:57.194 20:40:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.452 20:40:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:57.452 "name": "Existed_Raid", 00:13:57.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.452 "strip_size_kb": 0, 00:13:57.452 "state": "configuring", 00:13:57.452 "raid_level": "raid1", 00:13:57.452 "superblock": false, 00:13:57.452 "num_base_bdevs": 3, 00:13:57.452 "num_base_bdevs_discovered": 1, 00:13:57.452 "num_base_bdevs_operational": 3, 00:13:57.452 "base_bdevs_list": [ 00:13:57.452 { 00:13:57.452 "name": "BaseBdev1", 00:13:57.452 "uuid": "1519ce16-2e96-430f-8b64-edd5c5fdfc07", 00:13:57.452 "is_configured": true, 00:13:57.452 "data_offset": 0, 00:13:57.452 "data_size": 65536 00:13:57.452 }, 00:13:57.452 { 00:13:57.452 "name": "BaseBdev2", 00:13:57.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.452 "is_configured": false, 00:13:57.452 "data_offset": 0, 00:13:57.452 "data_size": 0 00:13:57.452 }, 00:13:57.452 { 00:13:57.452 "name": "BaseBdev3", 00:13:57.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.452 "is_configured": false, 00:13:57.452 "data_offset": 0, 00:13:57.452 "data_size": 0 00:13:57.452 } 00:13:57.452 ] 00:13:57.452 }' 00:13:57.452 20:40:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:57.452 20:40:40 -- common/autotest_common.sh@10 -- # set +x 00:13:57.767 20:40:41 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:58.025 [2024-04-15 20:40:41.427509] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:58.025 BaseBdev2 00:13:58.025 20:40:41 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:13:58.025 20:40:41 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:13:58.025 20:40:41 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:58.025 20:40:41 -- common/autotest_common.sh@889 -- # local i 00:13:58.025 20:40:41 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:58.025 20:40:41 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:58.025 20:40:41 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:58.283 20:40:41 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:58.543 [ 00:13:58.543 { 00:13:58.543 "name": "BaseBdev2", 00:13:58.543 "aliases": [ 00:13:58.543 "161c59d7-f1ab-41d4-9ec6-4a0dc8adcd8d" 00:13:58.543 ], 00:13:58.543 "product_name": "Malloc disk", 00:13:58.543 "block_size": 512, 00:13:58.544 "num_blocks": 65536, 00:13:58.544 "uuid": "161c59d7-f1ab-41d4-9ec6-4a0dc8adcd8d", 00:13:58.544 "assigned_rate_limits": { 00:13:58.544 "rw_ios_per_sec": 0, 00:13:58.544 "rw_mbytes_per_sec": 0, 00:13:58.544 "r_mbytes_per_sec": 0, 00:13:58.544 "w_mbytes_per_sec": 0 00:13:58.544 }, 00:13:58.544 "claimed": true, 00:13:58.544 "claim_type": "exclusive_write", 00:13:58.544 "zoned": false, 00:13:58.544 "supported_io_types": { 00:13:58.544 "read": true, 00:13:58.544 "write": true, 00:13:58.544 "unmap": true, 00:13:58.544 "write_zeroes": true, 00:13:58.544 "flush": true, 00:13:58.544 "reset": true, 00:13:58.544 "compare": false, 00:13:58.544 "compare_and_write": false, 00:13:58.544 "abort": true, 00:13:58.544 "nvme_admin": false, 00:13:58.544 "nvme_io": false 00:13:58.544 }, 00:13:58.544 "memory_domains": [ 00:13:58.544 { 00:13:58.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.544 "dma_device_type": 2 00:13:58.544 } 00:13:58.544 ], 00:13:58.544 "driver_specific": {} 00:13:58.544 } 00:13:58.544 ] 00:13:58.544 20:40:41 -- common/autotest_common.sh@895 -- # return 0 00:13:58.544 20:40:41 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:58.544 20:40:41 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:58.544 20:40:41 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:58.544 20:40:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:58.544 20:40:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:58.544 20:40:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:58.544 20:40:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:58.544 20:40:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:58.544 20:40:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:58.544 20:40:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:58.544 20:40:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:58.544 20:40:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:58.544 20:40:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.544 20:40:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:58.544 20:40:41 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:13:58.544 "name": "Existed_Raid", 00:13:58.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.544 "strip_size_kb": 0, 00:13:58.544 "state": "configuring", 00:13:58.544 "raid_level": "raid1", 00:13:58.544 "superblock": false, 00:13:58.544 "num_base_bdevs": 3, 00:13:58.544 "num_base_bdevs_discovered": 2, 00:13:58.544 "num_base_bdevs_operational": 3, 00:13:58.544 "base_bdevs_list": [ 00:13:58.544 { 00:13:58.544 "name": "BaseBdev1", 00:13:58.544 "uuid": "1519ce16-2e96-430f-8b64-edd5c5fdfc07", 00:13:58.544 "is_configured": true, 00:13:58.544 "data_offset": 0, 00:13:58.544 "data_size": 65536 00:13:58.544 }, 00:13:58.544 { 00:13:58.544 "name": "BaseBdev2", 00:13:58.544 "uuid": "161c59d7-f1ab-41d4-9ec6-4a0dc8adcd8d", 00:13:58.544 "is_configured": true, 00:13:58.544 "data_offset": 0, 00:13:58.544 "data_size": 65536 00:13:58.544 }, 00:13:58.544 { 00:13:58.544 "name": "BaseBdev3", 00:13:58.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.544 "is_configured": false, 00:13:58.544 "data_offset": 0, 00:13:58.544 "data_size": 0 00:13:58.544 } 00:13:58.544 ] 00:13:58.544 }' 00:13:58.544 20:40:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:58.544 20:40:41 -- common/autotest_common.sh@10 -- # set +x 00:13:59.112 20:40:42 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:59.371 [2024-04-15 20:40:42.699521] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:59.371 [2024-04-15 20:40:42.699569] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000028580 00:13:59.371 [2024-04-15 20:40:42.699577] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:59.371 [2024-04-15 20:40:42.699950] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:13:59.371 [2024-04-15 20:40:42.700166] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000028580 00:13:59.371 [2024-04-15 20:40:42.700177] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000028580 00:13:59.371 [2024-04-15 20:40:42.700341] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:59.371 BaseBdev3 00:13:59.371 20:40:42 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:13:59.371 20:40:42 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:13:59.371 20:40:42 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:59.371 20:40:42 -- common/autotest_common.sh@889 -- # local i 00:13:59.371 20:40:42 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:59.371 20:40:42 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:59.371 20:40:42 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:59.629 20:40:42 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:59.629 [ 00:13:59.629 { 00:13:59.629 "name": "BaseBdev3", 00:13:59.629 "aliases": [ 00:13:59.629 "ea3909a3-dac1-49e0-a575-b09f87bdfbaa" 00:13:59.629 ], 00:13:59.629 "product_name": "Malloc disk", 00:13:59.629 "block_size": 512, 00:13:59.629 "num_blocks": 65536, 00:13:59.629 "uuid": "ea3909a3-dac1-49e0-a575-b09f87bdfbaa", 00:13:59.629 "assigned_rate_limits": { 00:13:59.630 "rw_ios_per_sec": 0, 00:13:59.630 "rw_mbytes_per_sec": 0, 
00:13:59.630 "r_mbytes_per_sec": 0, 00:13:59.630 "w_mbytes_per_sec": 0 00:13:59.630 }, 00:13:59.630 "claimed": true, 00:13:59.630 "claim_type": "exclusive_write", 00:13:59.630 "zoned": false, 00:13:59.630 "supported_io_types": { 00:13:59.630 "read": true, 00:13:59.630 "write": true, 00:13:59.630 "unmap": true, 00:13:59.630 "write_zeroes": true, 00:13:59.630 "flush": true, 00:13:59.630 "reset": true, 00:13:59.630 "compare": false, 00:13:59.630 "compare_and_write": false, 00:13:59.630 "abort": true, 00:13:59.630 "nvme_admin": false, 00:13:59.630 "nvme_io": false 00:13:59.630 }, 00:13:59.630 "memory_domains": [ 00:13:59.630 { 00:13:59.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.630 "dma_device_type": 2 00:13:59.630 } 00:13:59.630 ], 00:13:59.630 "driver_specific": {} 00:13:59.630 } 00:13:59.630 ] 00:13:59.630 20:40:43 -- common/autotest_common.sh@895 -- # return 0 00:13:59.630 20:40:43 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:59.630 20:40:43 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:59.630 20:40:43 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:59.630 20:40:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:59.630 20:40:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:59.630 20:40:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:59.630 20:40:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:59.630 20:40:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:59.630 20:40:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:59.630 20:40:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:59.630 20:40:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:59.630 20:40:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:59.630 20:40:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:59.630 20:40:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.887 20:40:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:59.888 "name": "Existed_Raid", 00:13:59.888 "uuid": "8429fdfc-7fa9-4b47-afb7-e43d0d30118c", 00:13:59.888 "strip_size_kb": 0, 00:13:59.888 "state": "online", 00:13:59.888 "raid_level": "raid1", 00:13:59.888 "superblock": false, 00:13:59.888 "num_base_bdevs": 3, 00:13:59.888 "num_base_bdevs_discovered": 3, 00:13:59.888 "num_base_bdevs_operational": 3, 00:13:59.888 "base_bdevs_list": [ 00:13:59.888 { 00:13:59.888 "name": "BaseBdev1", 00:13:59.888 "uuid": "1519ce16-2e96-430f-8b64-edd5c5fdfc07", 00:13:59.888 "is_configured": true, 00:13:59.888 "data_offset": 0, 00:13:59.888 "data_size": 65536 00:13:59.888 }, 00:13:59.888 { 00:13:59.888 "name": "BaseBdev2", 00:13:59.888 "uuid": "161c59d7-f1ab-41d4-9ec6-4a0dc8adcd8d", 00:13:59.888 "is_configured": true, 00:13:59.888 "data_offset": 0, 00:13:59.888 "data_size": 65536 00:13:59.888 }, 00:13:59.888 { 00:13:59.888 "name": "BaseBdev3", 00:13:59.888 "uuid": "ea3909a3-dac1-49e0-a575-b09f87bdfbaa", 00:13:59.888 "is_configured": true, 00:13:59.888 "data_offset": 0, 00:13:59.888 "data_size": 65536 00:13:59.888 } 00:13:59.888 ] 00:13:59.888 }' 00:13:59.888 20:40:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:59.888 20:40:43 -- common/autotest_common.sh@10 -- # set +x 00:14:00.456 20:40:43 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:00.715 [2024-04-15 
20:40:43.977703] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:00.715 20:40:44 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:00.715 20:40:44 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:14:00.715 20:40:44 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:00.715 20:40:44 -- bdev/bdev_raid.sh@196 -- # return 0 00:14:00.715 20:40:44 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:14:00.715 20:40:44 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:00.715 20:40:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:00.715 20:40:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:00.715 20:40:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:00.715 20:40:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:00.715 20:40:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:00.715 20:40:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:00.715 20:40:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:00.715 20:40:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:00.715 20:40:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:00.715 20:40:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.715 20:40:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:00.975 20:40:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:00.975 "name": "Existed_Raid", 00:14:00.975 "uuid": "8429fdfc-7fa9-4b47-afb7-e43d0d30118c", 00:14:00.975 "strip_size_kb": 0, 00:14:00.975 "state": "online", 00:14:00.975 "raid_level": "raid1", 00:14:00.975 "superblock": false, 00:14:00.975 "num_base_bdevs": 3, 00:14:00.975 "num_base_bdevs_discovered": 2, 00:14:00.975 "num_base_bdevs_operational": 2, 00:14:00.975 "base_bdevs_list": [ 00:14:00.975 { 00:14:00.975 "name": null, 00:14:00.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.975 "is_configured": false, 00:14:00.975 "data_offset": 0, 00:14:00.975 "data_size": 65536 00:14:00.975 }, 00:14:00.975 { 00:14:00.975 "name": "BaseBdev2", 00:14:00.975 "uuid": "161c59d7-f1ab-41d4-9ec6-4a0dc8adcd8d", 00:14:00.975 "is_configured": true, 00:14:00.975 "data_offset": 0, 00:14:00.975 "data_size": 65536 00:14:00.975 }, 00:14:00.975 { 00:14:00.975 "name": "BaseBdev3", 00:14:00.975 "uuid": "ea3909a3-dac1-49e0-a575-b09f87bdfbaa", 00:14:00.975 "is_configured": true, 00:14:00.975 "data_offset": 0, 00:14:00.975 "data_size": 65536 00:14:00.975 } 00:14:00.975 ] 00:14:00.975 }' 00:14:00.975 20:40:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:00.975 20:40:44 -- common/autotest_common.sh@10 -- # set +x 00:14:01.546 20:40:44 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:01.546 20:40:44 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:01.546 20:40:44 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:01.546 20:40:44 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:01.810 20:40:45 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:01.810 20:40:45 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:01.810 20:40:45 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:02.070 [2024-04-15 20:40:45.311442] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
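The check above is the redundancy path: because raid1 carries redundancy, has_redundancy returns 0 and the expected state after losing a base bdev stays "online" instead of dropping back to "configuring". For reference, the state query that verify_raid_bdev_state drives can be reproduced by hand against the same RPC socket; a minimal sketch, assuming the bdev_svc process from this run is still listening on /var/tmp/spdk-raid.sock:

    # Dump all raid bdevs and pick out Existed_Raid, exactly as
    # verify_raid_bdev_state does in the trace above.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid")'

    # While the set still has two of its three members (as in the JSON dump
    # above), the fields to check are:
    #   "state": "online"
    #   "num_base_bdevs_discovered": 2
    #   "num_base_bdevs_operational": 2
    #   and a placeholder entry with "name": null in base_bdevs_list
    #   for the removed member.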
00:14:02.070 20:40:45 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:02.070 20:40:45 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:02.070 20:40:45 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:02.070 20:40:45 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:02.329 20:40:45 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:02.329 20:40:45 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:02.329 20:40:45 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:02.329 [2024-04-15 20:40:45.753332] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:02.329 [2024-04-15 20:40:45.753372] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:02.329 [2024-04-15 20:40:45.753412] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:02.588 [2024-04-15 20:40:45.843870] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:02.588 [2024-04-15 20:40:45.843925] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000028580 name Existed_Raid, state offline 00:14:02.588 20:40:45 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:02.588 20:40:45 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:02.588 20:40:45 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:02.588 20:40:45 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:02.588 20:40:46 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:02.588 20:40:46 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:02.588 20:40:46 -- bdev/bdev_raid.sh@287 -- # killprocess 51126 00:14:02.588 20:40:46 -- common/autotest_common.sh@926 -- # '[' -z 51126 ']' 00:14:02.588 20:40:46 -- common/autotest_common.sh@930 -- # kill -0 51126 00:14:02.588 20:40:46 -- common/autotest_common.sh@931 -- # uname 00:14:02.588 20:40:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:02.588 20:40:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 51126 00:14:02.588 killing process with pid 51126 00:14:02.588 20:40:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:02.588 20:40:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:02.588 20:40:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 51126' 00:14:02.588 20:40:46 -- common/autotest_common.sh@945 -- # kill 51126 00:14:02.588 20:40:46 -- common/autotest_common.sh@950 -- # wait 51126 00:14:02.588 [2024-04-15 20:40:46.070880] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:02.588 [2024-04-15 20:40:46.070995] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:03.970 ************************************ 00:14:03.970 END TEST raid_state_function_test 00:14:03.970 ************************************ 00:14:03.970 20:40:47 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:03.970 00:14:03.970 real 0m10.979s 00:14:03.970 user 0m18.762s 00:14:03.970 sys 0m1.419s 00:14:03.970 20:40:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:03.970 20:40:47 -- common/autotest_common.sh@10 -- # set +x 00:14:03.970 20:40:47 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:14:03.970 
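The superblock variant that starts here reuses the same raid_state_function_test function, only flipping the third argument. A sketch of the argument-to-local mapping, as it appears in the xtrace further down (not runnable standalone; the function lives in bdev_raid.sh and is invoked via run_test):

    # raid_state_function_test <raid_level> <num_base_bdevs> <superblock>
    #   raid1 -> local raid_level=raid1 (strip_size stays 0: raid1 does not stripe)
    #   3     -> local num_base_bdevs=3, i.e. BaseBdev1..BaseBdev3
    #   true  -> superblock_create_arg=-s, so every bdev_raid_create below gets -s
    raid_state_function_test raid1 3 true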
20:40:47 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:03.970 20:40:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:03.970 20:40:47 -- common/autotest_common.sh@10 -- # set +x 00:14:03.970 ************************************ 00:14:03.970 START TEST raid_state_function_test_sb 00:14:03.970 ************************************ 00:14:03.970 20:40:47 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 true 00:14:03.970 20:40:47 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:14:03.970 20:40:47 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:14:03.970 20:40:47 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:14:03.970 20:40:47 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:03.970 20:40:47 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:14:04.229 20:40:47 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:04.229 20:40:47 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:04.229 20:40:47 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:04.229 20:40:47 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:04.229 20:40:47 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:04.229 20:40:47 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:04.229 20:40:47 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:04.229 20:40:47 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:04.229 20:40:47 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:14:04.229 20:40:47 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:04.229 20:40:47 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:04.229 Process raid pid: 51498 00:14:04.229 20:40:47 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:04.229 20:40:47 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:04.229 20:40:47 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:04.229 20:40:47 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:04.229 20:40:47 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:04.229 20:40:47 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:14:04.229 20:40:47 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:14:04.229 20:40:47 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:14:04.229 20:40:47 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:14:04.229 20:40:47 -- bdev/bdev_raid.sh@226 -- # raid_pid=51498 00:14:04.229 20:40:47 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 51498' 00:14:04.229 20:40:47 -- bdev/bdev_raid.sh@228 -- # waitforlisten 51498 /var/tmp/spdk-raid.sock 00:14:04.229 20:40:47 -- common/autotest_common.sh@819 -- # '[' -z 51498 ']' 00:14:04.229 20:40:47 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:04.229 20:40:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:04.229 20:40:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:04.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:04.229 20:40:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:04.229 20:40:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:04.229 20:40:47 -- common/autotest_common.sh@10 -- # set +x 00:14:04.229 [2024-04-15 20:40:47.621627] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
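Once bdev_svc is listening on the raid socket, the superblock run drives the same RPC sequence as the non-superblock test, now passing -s so the raid bdev reserves on-disk superblock space. A minimal by-hand reproduction sketch using only calls visible in this trace (the RPC shell variable is introduced here for brevity; the rpc.py path and socket are the ones from this run):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Creating the raid before its members exist is legal: it sits in the
    # "configuring" state until all three base bdevs have been claimed.
    $RPC bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

    # 32 MiB malloc bdevs with 512-byte blocks, as created throughout this test;
    # each one is claimed by the raid as soon as it appears.
    $RPC bdev_malloc_create 32 512 -b BaseBdev1
    $RPC bdev_malloc_create 32 512 -b BaseBdev2
    $RPC bdev_malloc_create 32 512 -b BaseBdev3

    # With -s, data_offset moves to 2048 and data_size shrinks to 63488 of the
    # 65536 blocks per member, which is what the JSON dumps below report.
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'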
00:14:04.229 [2024-04-15 20:40:47.621817] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.487 [2024-04-15 20:40:47.795104] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.749 [2024-04-15 20:40:48.001291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.749 [2024-04-15 20:40:48.202299] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:05.687 20:40:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:05.687 20:40:49 -- common/autotest_common.sh@852 -- # return 0 00:14:05.687 20:40:49 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:05.946 [2024-04-15 20:40:49.244317] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:05.946 [2024-04-15 20:40:49.244395] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:05.946 [2024-04-15 20:40:49.244407] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:05.946 [2024-04-15 20:40:49.244426] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:05.946 [2024-04-15 20:40:49.244433] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:05.946 [2024-04-15 20:40:49.244474] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:05.946 20:40:49 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:05.946 20:40:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:05.946 20:40:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:05.946 20:40:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:05.946 20:40:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:05.946 20:40:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:05.946 20:40:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:05.946 20:40:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:05.946 20:40:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:05.946 20:40:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:05.946 20:40:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:05.946 20:40:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.206 20:40:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:06.206 "name": "Existed_Raid", 00:14:06.206 "uuid": "15687c45-04f9-4bba-a1ed-501b4b545548", 00:14:06.206 "strip_size_kb": 0, 00:14:06.206 "state": "configuring", 00:14:06.206 "raid_level": "raid1", 00:14:06.206 "superblock": true, 00:14:06.206 "num_base_bdevs": 3, 00:14:06.206 "num_base_bdevs_discovered": 0, 00:14:06.206 "num_base_bdevs_operational": 3, 00:14:06.206 "base_bdevs_list": [ 00:14:06.206 { 00:14:06.206 "name": "BaseBdev1", 00:14:06.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.206 "is_configured": false, 00:14:06.206 "data_offset": 0, 00:14:06.206 "data_size": 0 00:14:06.206 }, 00:14:06.206 { 00:14:06.206 "name": "BaseBdev2", 00:14:06.206 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:06.206 "is_configured": false, 00:14:06.206 "data_offset": 0, 00:14:06.206 "data_size": 0 00:14:06.206 }, 00:14:06.206 { 00:14:06.206 "name": "BaseBdev3", 00:14:06.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.206 "is_configured": false, 00:14:06.206 "data_offset": 0, 00:14:06.206 "data_size": 0 00:14:06.206 } 00:14:06.206 ] 00:14:06.206 }' 00:14:06.206 20:40:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:06.206 20:40:49 -- common/autotest_common.sh@10 -- # set +x 00:14:06.464 20:40:49 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:06.723 [2024-04-15 20:40:50.102847] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:06.723 [2024-04-15 20:40:50.102895] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:14:06.723 20:40:50 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:06.981 [2024-04-15 20:40:50.282671] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:06.981 [2024-04-15 20:40:50.282739] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:06.981 [2024-04-15 20:40:50.282750] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:06.981 [2024-04-15 20:40:50.282766] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:06.981 [2024-04-15 20:40:50.282773] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:06.981 [2024-04-15 20:40:50.282802] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:06.981 20:40:50 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:07.240 [2024-04-15 20:40:50.485174] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:07.240 BaseBdev1 00:14:07.240 20:40:50 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:07.240 20:40:50 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:07.240 20:40:50 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:07.240 20:40:50 -- common/autotest_common.sh@889 -- # local i 00:14:07.240 20:40:50 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:07.240 20:40:50 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:07.241 20:40:50 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:07.241 20:40:50 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:07.500 [ 00:14:07.500 { 00:14:07.500 "name": "BaseBdev1", 00:14:07.500 "aliases": [ 00:14:07.500 "e5faf988-4671-4471-bd8a-ef0de20775d5" 00:14:07.500 ], 00:14:07.500 "product_name": "Malloc disk", 00:14:07.500 "block_size": 512, 00:14:07.500 "num_blocks": 65536, 00:14:07.500 "uuid": "e5faf988-4671-4471-bd8a-ef0de20775d5", 00:14:07.500 "assigned_rate_limits": { 00:14:07.500 "rw_ios_per_sec": 0, 00:14:07.500 "rw_mbytes_per_sec": 0, 00:14:07.500 "r_mbytes_per_sec": 0, 00:14:07.500 "w_mbytes_per_sec": 0 
00:14:07.500 }, 00:14:07.500 "claimed": true, 00:14:07.500 "claim_type": "exclusive_write", 00:14:07.500 "zoned": false, 00:14:07.500 "supported_io_types": { 00:14:07.500 "read": true, 00:14:07.500 "write": true, 00:14:07.500 "unmap": true, 00:14:07.500 "write_zeroes": true, 00:14:07.500 "flush": true, 00:14:07.500 "reset": true, 00:14:07.500 "compare": false, 00:14:07.500 "compare_and_write": false, 00:14:07.500 "abort": true, 00:14:07.500 "nvme_admin": false, 00:14:07.500 "nvme_io": false 00:14:07.500 }, 00:14:07.500 "memory_domains": [ 00:14:07.500 { 00:14:07.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.500 "dma_device_type": 2 00:14:07.500 } 00:14:07.500 ], 00:14:07.500 "driver_specific": {} 00:14:07.500 } 00:14:07.500 ] 00:14:07.500 20:40:50 -- common/autotest_common.sh@895 -- # return 0 00:14:07.500 20:40:50 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:07.500 20:40:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:07.500 20:40:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:07.500 20:40:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:07.500 20:40:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:07.500 20:40:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:07.500 20:40:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:07.500 20:40:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:07.500 20:40:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:07.500 20:40:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:07.500 20:40:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:07.500 20:40:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:07.758 20:40:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:07.758 "name": "Existed_Raid", 00:14:07.758 "uuid": "d5e6beb2-af8b-4a6c-ae7e-6f796628b788", 00:14:07.758 "strip_size_kb": 0, 00:14:07.758 "state": "configuring", 00:14:07.758 "raid_level": "raid1", 00:14:07.758 "superblock": true, 00:14:07.758 "num_base_bdevs": 3, 00:14:07.758 "num_base_bdevs_discovered": 1, 00:14:07.758 "num_base_bdevs_operational": 3, 00:14:07.758 "base_bdevs_list": [ 00:14:07.758 { 00:14:07.758 "name": "BaseBdev1", 00:14:07.758 "uuid": "e5faf988-4671-4471-bd8a-ef0de20775d5", 00:14:07.758 "is_configured": true, 00:14:07.758 "data_offset": 2048, 00:14:07.758 "data_size": 63488 00:14:07.758 }, 00:14:07.758 { 00:14:07.758 "name": "BaseBdev2", 00:14:07.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.758 "is_configured": false, 00:14:07.758 "data_offset": 0, 00:14:07.758 "data_size": 0 00:14:07.758 }, 00:14:07.758 { 00:14:07.758 "name": "BaseBdev3", 00:14:07.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.758 "is_configured": false, 00:14:07.758 "data_offset": 0, 00:14:07.758 "data_size": 0 00:14:07.758 } 00:14:07.758 ] 00:14:07.758 }' 00:14:07.758 20:40:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:07.758 20:40:51 -- common/autotest_common.sh@10 -- # set +x 00:14:08.326 20:40:51 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:08.585 [2024-04-15 20:40:51.843140] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:08.585 [2024-04-15 20:40:51.843195] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000027380 name Existed_Raid, state configuring 00:14:08.585 20:40:51 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:14:08.585 20:40:51 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:08.843 20:40:52 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:08.843 BaseBdev1 00:14:08.843 20:40:52 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:14:08.843 20:40:52 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:08.843 20:40:52 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:08.843 20:40:52 -- common/autotest_common.sh@889 -- # local i 00:14:08.843 20:40:52 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:08.843 20:40:52 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:08.843 20:40:52 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:09.101 20:40:52 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:09.101 [ 00:14:09.101 { 00:14:09.101 "name": "BaseBdev1", 00:14:09.101 "aliases": [ 00:14:09.101 "0f64f8f0-ae71-40d9-8791-9b50dcbff133" 00:14:09.101 ], 00:14:09.101 "product_name": "Malloc disk", 00:14:09.101 "block_size": 512, 00:14:09.101 "num_blocks": 65536, 00:14:09.101 "uuid": "0f64f8f0-ae71-40d9-8791-9b50dcbff133", 00:14:09.101 "assigned_rate_limits": { 00:14:09.101 "rw_ios_per_sec": 0, 00:14:09.101 "rw_mbytes_per_sec": 0, 00:14:09.101 "r_mbytes_per_sec": 0, 00:14:09.101 "w_mbytes_per_sec": 0 00:14:09.101 }, 00:14:09.101 "claimed": false, 00:14:09.101 "zoned": false, 00:14:09.101 "supported_io_types": { 00:14:09.101 "read": true, 00:14:09.101 "write": true, 00:14:09.101 "unmap": true, 00:14:09.101 "write_zeroes": true, 00:14:09.101 "flush": true, 00:14:09.101 "reset": true, 00:14:09.101 "compare": false, 00:14:09.101 "compare_and_write": false, 00:14:09.101 "abort": true, 00:14:09.101 "nvme_admin": false, 00:14:09.101 "nvme_io": false 00:14:09.101 }, 00:14:09.101 "memory_domains": [ 00:14:09.101 { 00:14:09.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.101 "dma_device_type": 2 00:14:09.101 } 00:14:09.101 ], 00:14:09.101 "driver_specific": {} 00:14:09.101 } 00:14:09.102 ] 00:14:09.102 20:40:52 -- common/autotest_common.sh@895 -- # return 0 00:14:09.102 20:40:52 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:09.360 [2024-04-15 20:40:52.722396] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:09.360 [2024-04-15 20:40:52.723813] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:09.360 [2024-04-15 20:40:52.723870] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:09.360 [2024-04-15 20:40:52.723880] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:09.360 [2024-04-15 20:40:52.723901] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:09.360 20:40:52 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:09.360 20:40:52 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:09.360 20:40:52 -- 
bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:09.360 20:40:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:09.360 20:40:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:09.360 20:40:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:09.360 20:40:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:09.360 20:40:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:09.360 20:40:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:09.360 20:40:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:09.360 20:40:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:09.360 20:40:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:09.360 20:40:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.360 20:40:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:09.619 20:40:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:09.619 "name": "Existed_Raid", 00:14:09.619 "uuid": "8fb5b3a2-3820-4bf4-9eb8-b8528bc4c4df", 00:14:09.619 "strip_size_kb": 0, 00:14:09.619 "state": "configuring", 00:14:09.619 "raid_level": "raid1", 00:14:09.619 "superblock": true, 00:14:09.619 "num_base_bdevs": 3, 00:14:09.619 "num_base_bdevs_discovered": 1, 00:14:09.619 "num_base_bdevs_operational": 3, 00:14:09.619 "base_bdevs_list": [ 00:14:09.619 { 00:14:09.619 "name": "BaseBdev1", 00:14:09.619 "uuid": "0f64f8f0-ae71-40d9-8791-9b50dcbff133", 00:14:09.619 "is_configured": true, 00:14:09.619 "data_offset": 2048, 00:14:09.619 "data_size": 63488 00:14:09.619 }, 00:14:09.619 { 00:14:09.619 "name": "BaseBdev2", 00:14:09.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.619 "is_configured": false, 00:14:09.619 "data_offset": 0, 00:14:09.619 "data_size": 0 00:14:09.619 }, 00:14:09.619 { 00:14:09.619 "name": "BaseBdev3", 00:14:09.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.619 "is_configured": false, 00:14:09.619 "data_offset": 0, 00:14:09.619 "data_size": 0 00:14:09.619 } 00:14:09.619 ] 00:14:09.619 }' 00:14:09.619 20:40:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:09.619 20:40:52 -- common/autotest_common.sh@10 -- # set +x 00:14:10.186 20:40:53 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:10.186 BaseBdev2 00:14:10.186 [2024-04-15 20:40:53.600799] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:10.186 20:40:53 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:10.186 20:40:53 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:14:10.186 20:40:53 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:10.186 20:40:53 -- common/autotest_common.sh@889 -- # local i 00:14:10.186 20:40:53 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:10.186 20:40:53 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:10.186 20:40:53 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:10.445 20:40:53 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:10.445 [ 00:14:10.445 { 00:14:10.445 "name": "BaseBdev2", 00:14:10.445 "aliases": [ 00:14:10.445 
"a1e0aaeb-9c51-458a-8785-73f24a2b398f" 00:14:10.445 ], 00:14:10.445 "product_name": "Malloc disk", 00:14:10.445 "block_size": 512, 00:14:10.445 "num_blocks": 65536, 00:14:10.445 "uuid": "a1e0aaeb-9c51-458a-8785-73f24a2b398f", 00:14:10.445 "assigned_rate_limits": { 00:14:10.445 "rw_ios_per_sec": 0, 00:14:10.445 "rw_mbytes_per_sec": 0, 00:14:10.445 "r_mbytes_per_sec": 0, 00:14:10.445 "w_mbytes_per_sec": 0 00:14:10.445 }, 00:14:10.445 "claimed": true, 00:14:10.445 "claim_type": "exclusive_write", 00:14:10.445 "zoned": false, 00:14:10.445 "supported_io_types": { 00:14:10.445 "read": true, 00:14:10.445 "write": true, 00:14:10.445 "unmap": true, 00:14:10.445 "write_zeroes": true, 00:14:10.445 "flush": true, 00:14:10.445 "reset": true, 00:14:10.445 "compare": false, 00:14:10.445 "compare_and_write": false, 00:14:10.445 "abort": true, 00:14:10.445 "nvme_admin": false, 00:14:10.445 "nvme_io": false 00:14:10.445 }, 00:14:10.445 "memory_domains": [ 00:14:10.445 { 00:14:10.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.445 "dma_device_type": 2 00:14:10.445 } 00:14:10.445 ], 00:14:10.445 "driver_specific": {} 00:14:10.445 } 00:14:10.445 ] 00:14:10.445 20:40:53 -- common/autotest_common.sh@895 -- # return 0 00:14:10.445 20:40:53 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:10.445 20:40:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:10.445 20:40:53 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:10.445 20:40:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:10.445 20:40:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:10.445 20:40:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:10.445 20:40:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:10.445 20:40:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:10.445 20:40:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:10.445 20:40:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:10.445 20:40:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:10.445 20:40:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:10.445 20:40:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.445 20:40:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.704 20:40:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:10.704 "name": "Existed_Raid", 00:14:10.704 "uuid": "8fb5b3a2-3820-4bf4-9eb8-b8528bc4c4df", 00:14:10.704 "strip_size_kb": 0, 00:14:10.704 "state": "configuring", 00:14:10.704 "raid_level": "raid1", 00:14:10.704 "superblock": true, 00:14:10.704 "num_base_bdevs": 3, 00:14:10.704 "num_base_bdevs_discovered": 2, 00:14:10.704 "num_base_bdevs_operational": 3, 00:14:10.704 "base_bdevs_list": [ 00:14:10.704 { 00:14:10.704 "name": "BaseBdev1", 00:14:10.704 "uuid": "0f64f8f0-ae71-40d9-8791-9b50dcbff133", 00:14:10.704 "is_configured": true, 00:14:10.704 "data_offset": 2048, 00:14:10.704 "data_size": 63488 00:14:10.704 }, 00:14:10.704 { 00:14:10.704 "name": "BaseBdev2", 00:14:10.704 "uuid": "a1e0aaeb-9c51-458a-8785-73f24a2b398f", 00:14:10.704 "is_configured": true, 00:14:10.704 "data_offset": 2048, 00:14:10.704 "data_size": 63488 00:14:10.704 }, 00:14:10.704 { 00:14:10.704 "name": "BaseBdev3", 00:14:10.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.704 "is_configured": false, 00:14:10.704 "data_offset": 0, 00:14:10.704 "data_size": 0 00:14:10.704 } 
00:14:10.704 ] 00:14:10.704 }' 00:14:10.704 20:40:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:10.704 20:40:54 -- common/autotest_common.sh@10 -- # set +x 00:14:11.269 20:40:54 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:11.528 BaseBdev3 00:14:11.528 [2024-04-15 20:40:54.870774] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:11.528 [2024-04-15 20:40:54.870912] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000028b80 00:14:11.528 [2024-04-15 20:40:54.870924] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:11.528 [2024-04-15 20:40:54.871007] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:14:11.528 [2024-04-15 20:40:54.871201] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000028b80 00:14:11.528 [2024-04-15 20:40:54.871212] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000028b80 00:14:11.528 [2024-04-15 20:40:54.871315] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.528 20:40:54 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:14:11.528 20:40:54 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:14:11.528 20:40:54 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:11.528 20:40:54 -- common/autotest_common.sh@889 -- # local i 00:14:11.528 20:40:54 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:11.528 20:40:54 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:11.528 20:40:54 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:11.787 20:40:55 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:11.787 [ 00:14:11.787 { 00:14:11.787 "name": "BaseBdev3", 00:14:11.787 "aliases": [ 00:14:11.787 "2f6bbb62-5759-4c79-ab9a-059b5a71398a" 00:14:11.787 ], 00:14:11.787 "product_name": "Malloc disk", 00:14:11.787 "block_size": 512, 00:14:11.787 "num_blocks": 65536, 00:14:11.787 "uuid": "2f6bbb62-5759-4c79-ab9a-059b5a71398a", 00:14:11.787 "assigned_rate_limits": { 00:14:11.787 "rw_ios_per_sec": 0, 00:14:11.787 "rw_mbytes_per_sec": 0, 00:14:11.787 "r_mbytes_per_sec": 0, 00:14:11.787 "w_mbytes_per_sec": 0 00:14:11.787 }, 00:14:11.787 "claimed": true, 00:14:11.787 "claim_type": "exclusive_write", 00:14:11.787 "zoned": false, 00:14:11.787 "supported_io_types": { 00:14:11.787 "read": true, 00:14:11.787 "write": true, 00:14:11.787 "unmap": true, 00:14:11.787 "write_zeroes": true, 00:14:11.787 "flush": true, 00:14:11.787 "reset": true, 00:14:11.787 "compare": false, 00:14:11.787 "compare_and_write": false, 00:14:11.787 "abort": true, 00:14:11.788 "nvme_admin": false, 00:14:11.788 "nvme_io": false 00:14:11.788 }, 00:14:11.788 "memory_domains": [ 00:14:11.788 { 00:14:11.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.788 "dma_device_type": 2 00:14:11.788 } 00:14:11.788 ], 00:14:11.788 "driver_specific": {} 00:14:11.788 } 00:14:11.788 ] 00:14:11.788 20:40:55 -- common/autotest_common.sh@895 -- # return 0 00:14:11.788 20:40:55 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:11.788 20:40:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:11.788 20:40:55 -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:11.788 20:40:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:11.788 20:40:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:11.788 20:40:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:11.788 20:40:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:11.788 20:40:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:11.788 20:40:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:11.788 20:40:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:11.788 20:40:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:11.788 20:40:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:11.788 20:40:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:11.788 20:40:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.047 20:40:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:12.047 "name": "Existed_Raid", 00:14:12.047 "uuid": "8fb5b3a2-3820-4bf4-9eb8-b8528bc4c4df", 00:14:12.047 "strip_size_kb": 0, 00:14:12.047 "state": "online", 00:14:12.047 "raid_level": "raid1", 00:14:12.047 "superblock": true, 00:14:12.047 "num_base_bdevs": 3, 00:14:12.047 "num_base_bdevs_discovered": 3, 00:14:12.047 "num_base_bdevs_operational": 3, 00:14:12.047 "base_bdevs_list": [ 00:14:12.047 { 00:14:12.047 "name": "BaseBdev1", 00:14:12.047 "uuid": "0f64f8f0-ae71-40d9-8791-9b50dcbff133", 00:14:12.047 "is_configured": true, 00:14:12.047 "data_offset": 2048, 00:14:12.047 "data_size": 63488 00:14:12.047 }, 00:14:12.047 { 00:14:12.047 "name": "BaseBdev2", 00:14:12.047 "uuid": "a1e0aaeb-9c51-458a-8785-73f24a2b398f", 00:14:12.047 "is_configured": true, 00:14:12.047 "data_offset": 2048, 00:14:12.047 "data_size": 63488 00:14:12.047 }, 00:14:12.047 { 00:14:12.047 "name": "BaseBdev3", 00:14:12.047 "uuid": "2f6bbb62-5759-4c79-ab9a-059b5a71398a", 00:14:12.047 "is_configured": true, 00:14:12.047 "data_offset": 2048, 00:14:12.047 "data_size": 63488 00:14:12.047 } 00:14:12.047 ] 00:14:12.047 }' 00:14:12.047 20:40:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:12.047 20:40:55 -- common/autotest_common.sh@10 -- # set +x 00:14:12.650 20:40:56 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:12.910 [2024-04-15 20:40:56.185129] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:12.910 20:40:56 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:12.910 20:40:56 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:14:12.910 20:40:56 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:12.910 20:40:56 -- bdev/bdev_raid.sh@196 -- # return 0 00:14:12.910 20:40:56 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:14:12.910 20:40:56 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:12.910 20:40:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:12.910 20:40:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:12.910 20:40:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:12.910 20:40:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:12.910 20:40:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:12.910 20:40:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:12.910 20:40:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 
00:14:12.910 20:40:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:12.910 20:40:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:12.910 20:40:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:12.910 20:40:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.169 20:40:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:13.169 "name": "Existed_Raid", 00:14:13.169 "uuid": "8fb5b3a2-3820-4bf4-9eb8-b8528bc4c4df", 00:14:13.169 "strip_size_kb": 0, 00:14:13.169 "state": "online", 00:14:13.169 "raid_level": "raid1", 00:14:13.169 "superblock": true, 00:14:13.169 "num_base_bdevs": 3, 00:14:13.169 "num_base_bdevs_discovered": 2, 00:14:13.169 "num_base_bdevs_operational": 2, 00:14:13.169 "base_bdevs_list": [ 00:14:13.169 { 00:14:13.169 "name": null, 00:14:13.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.169 "is_configured": false, 00:14:13.169 "data_offset": 2048, 00:14:13.169 "data_size": 63488 00:14:13.169 }, 00:14:13.169 { 00:14:13.169 "name": "BaseBdev2", 00:14:13.169 "uuid": "a1e0aaeb-9c51-458a-8785-73f24a2b398f", 00:14:13.169 "is_configured": true, 00:14:13.169 "data_offset": 2048, 00:14:13.169 "data_size": 63488 00:14:13.169 }, 00:14:13.169 { 00:14:13.169 "name": "BaseBdev3", 00:14:13.169 "uuid": "2f6bbb62-5759-4c79-ab9a-059b5a71398a", 00:14:13.169 "is_configured": true, 00:14:13.169 "data_offset": 2048, 00:14:13.169 "data_size": 63488 00:14:13.169 } 00:14:13.169 ] 00:14:13.169 }' 00:14:13.169 20:40:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:13.169 20:40:56 -- common/autotest_common.sh@10 -- # set +x 00:14:13.738 20:40:56 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:13.738 20:40:56 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:13.738 20:40:56 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.738 20:40:56 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:13.738 20:40:57 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:13.738 20:40:57 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:13.738 20:40:57 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:13.997 [2024-04-15 20:40:57.316565] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:13.997 20:40:57 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:13.997 20:40:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:13.997 20:40:57 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.997 20:40:57 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:14.256 20:40:57 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:14.256 20:40:57 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:14.256 20:40:57 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:14.515 [2024-04-15 20:40:57.772549] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:14.515 [2024-04-15 20:40:57.772582] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:14.515 [2024-04-15 20:40:57.772619] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:14.515 [2024-04-15 20:40:57.861481] 
bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:14.515 [2024-04-15 20:40:57.861524] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000028b80 name Existed_Raid, state offline 00:14:14.515 20:40:57 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:14.515 20:40:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:14.515 20:40:57 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:14.515 20:40:57 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:14.775 20:40:58 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:14.775 20:40:58 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:14.775 20:40:58 -- bdev/bdev_raid.sh@287 -- # killprocess 51498 00:14:14.775 20:40:58 -- common/autotest_common.sh@926 -- # '[' -z 51498 ']' 00:14:14.775 20:40:58 -- common/autotest_common.sh@930 -- # kill -0 51498 00:14:14.775 20:40:58 -- common/autotest_common.sh@931 -- # uname 00:14:14.775 20:40:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:14.775 20:40:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 51498 00:14:14.775 killing process with pid 51498 00:14:14.775 20:40:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:14.775 20:40:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:14.775 20:40:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 51498' 00:14:14.775 20:40:58 -- common/autotest_common.sh@945 -- # kill 51498 00:14:14.775 20:40:58 -- common/autotest_common.sh@950 -- # wait 51498 00:14:14.775 [2024-04-15 20:40:58.093112] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:14.775 [2024-04-15 20:40:58.093211] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:16.154 ************************************ 00:14:16.154 END TEST raid_state_function_test_sb 00:14:16.154 ************************************ 00:14:16.154 20:40:59 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:16.154 00:14:16.154 real 0m11.866s 00:14:16.154 user 0m20.007s 00:14:16.154 sys 0m1.429s 00:14:16.154 20:40:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:16.154 20:40:59 -- common/autotest_common.sh@10 -- # set +x 00:14:16.154 20:40:59 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:14:16.154 20:40:59 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:14:16.154 20:40:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:16.154 20:40:59 -- common/autotest_common.sh@10 -- # set +x 00:14:16.154 ************************************ 00:14:16.154 START TEST raid_superblock_test 00:14:16.154 ************************************ 00:14:16.154 20:40:59 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 3 00:14:16.154 20:40:59 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:14:16.154 20:40:59 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:14:16.154 20:40:59 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:14:16.154 20:40:59 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:14:16.154 20:40:59 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:14:16.154 20:40:59 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:14:16.154 20:40:59 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:14:16.154 20:40:59 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:14:16.154 20:40:59 -- bdev/bdev_raid.sh@343 -- # 
local raid_bdev_name=raid_bdev1 00:14:16.154 20:40:59 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:14:16.154 20:40:59 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:14:16.154 20:40:59 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:14:16.154 20:40:59 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:14:16.154 20:40:59 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:14:16.154 20:40:59 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:14:16.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:16.154 20:40:59 -- bdev/bdev_raid.sh@357 -- # raid_pid=51881 00:14:16.154 20:40:59 -- bdev/bdev_raid.sh@358 -- # waitforlisten 51881 /var/tmp/spdk-raid.sock 00:14:16.154 20:40:59 -- common/autotest_common.sh@819 -- # '[' -z 51881 ']' 00:14:16.154 20:40:59 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:16.154 20:40:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:16.154 20:40:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:16.154 20:40:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:16.154 20:40:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:16.154 20:40:59 -- common/autotest_common.sh@10 -- # set +x 00:14:16.154 [2024-04-15 20:40:59.563805] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:14:16.154 [2024-04-15 20:40:59.563958] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid51881 ] 00:14:16.413 [2024-04-15 20:40:59.736635] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.672 [2024-04-15 20:40:59.927724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.672 [2024-04-15 20:41:00.124501] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:17.609 20:41:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:17.609 20:41:00 -- common/autotest_common.sh@852 -- # return 0 00:14:17.609 20:41:00 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:14:17.609 20:41:00 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:17.609 20:41:00 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:14:17.609 20:41:00 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:14:17.609 20:41:00 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:17.609 20:41:00 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:17.609 20:41:00 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:17.609 20:41:00 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:17.609 20:41:00 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:17.867 malloc1 00:14:17.867 20:41:01 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:17.867 [2024-04-15 20:41:01.319325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:17.867 [2024-04-15 20:41:01.319406] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:14:17.867 [2024-04-15 20:41:01.319450] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027080 00:14:17.868 [2024-04-15 20:41:01.319487] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.868 [2024-04-15 20:41:01.321033] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.868 [2024-04-15 20:41:01.321070] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:17.868 pt1 00:14:17.868 20:41:01 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:17.868 20:41:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:17.868 20:41:01 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:14:17.868 20:41:01 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:14:17.868 20:41:01 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:17.868 20:41:01 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:17.868 20:41:01 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:17.868 20:41:01 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:17.868 20:41:01 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:18.127 malloc2 00:14:18.127 20:41:01 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:18.385 [2024-04-15 20:41:01.656707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:18.385 [2024-04-15 20:41:01.656776] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:18.385 [2024-04-15 20:41:01.656814] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000028e80 00:14:18.385 [2024-04-15 20:41:01.656846] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:18.385 [2024-04-15 20:41:01.658369] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:18.385 [2024-04-15 20:41:01.658403] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:18.385 pt2 00:14:18.385 20:41:01 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:18.385 20:41:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:18.385 20:41:01 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:14:18.385 20:41:01 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:14:18.385 20:41:01 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:18.385 20:41:01 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:18.385 20:41:01 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:18.385 20:41:01 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:18.385 20:41:01 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:14:18.385 malloc3 00:14:18.644 20:41:01 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:18.644 [2024-04-15 20:41:02.049442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:18.644 [2024-04-15 20:41:02.049513] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:14:18.644 [2024-04-15 20:41:02.049557] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002ac80 00:14:18.644 [2024-04-15 20:41:02.049591] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:18.644 [2024-04-15 20:41:02.051082] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:18.644 [2024-04-15 20:41:02.051130] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:18.644 pt3 00:14:18.644 20:41:02 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:18.644 20:41:02 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:18.644 20:41:02 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:14:18.918 [2024-04-15 20:41:02.217279] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:18.918 [2024-04-15 20:41:02.220287] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:18.918 [2024-04-15 20:41:02.220465] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:18.918 [2024-04-15 20:41:02.220931] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002c180 00:14:18.918 [2024-04-15 20:41:02.220986] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:18.918 [2024-04-15 20:41:02.221438] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:14:18.918 [2024-04-15 20:41:02.222282] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002c180 00:14:18.918 [2024-04-15 20:41:02.222351] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002c180 00:14:18.918 [2024-04-15 20:41:02.222908] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.918 20:41:02 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:18.918 20:41:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:18.918 20:41:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:18.918 20:41:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:18.918 20:41:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:18.918 20:41:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:18.918 20:41:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:18.918 20:41:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:18.918 20:41:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:18.918 20:41:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:18.918 20:41:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.918 20:41:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:18.918 20:41:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:18.918 "name": "raid_bdev1", 00:14:18.918 "uuid": "034d2085-aa8c-43a5-9fab-ee6236ff3069", 00:14:18.918 "strip_size_kb": 0, 00:14:18.918 "state": "online", 00:14:18.918 "raid_level": "raid1", 00:14:18.918 "superblock": true, 00:14:18.918 "num_base_bdevs": 3, 00:14:18.918 "num_base_bdevs_discovered": 3, 00:14:18.918 "num_base_bdevs_operational": 3, 00:14:18.918 "base_bdevs_list": [ 00:14:18.918 { 00:14:18.918 "name": "pt1", 00:14:18.918 "uuid": 
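The records above show the full assembly path: three 32 MiB / 512-byte-block malloc bdevs are wrapped in passthru bdevs, which are then combined into a raid1 with an on-disk superblock (-s), after which the array flips to online. A condensed sketch of the same RPC sequence, with names, sizes and UUIDs taken from the trace (the 63488-block count in the dump is consistent with the 65536-block malloc minus the 2048-block data_offset reserved for the superblock):

for i in 1 2 3; do
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_malloc_create 32 512 -b "malloc$i"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_passthru_create -b "malloc$i" -p "pt$i" -u "00000000-0000-0000-0000-00000000000$i"
done
# -s stamps a superblock onto each base bdev so the array can be reassembled later
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s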
"27a95341-c5c4-5251-aff2-796732ea0b24", 00:14:18.918 "is_configured": true, 00:14:18.918 "data_offset": 2048, 00:14:18.918 "data_size": 63488 00:14:18.918 }, 00:14:18.918 { 00:14:18.918 "name": "pt2", 00:14:18.918 "uuid": "d34bc5ed-cd31-55bf-a133-e002ee1bd7a8", 00:14:18.918 "is_configured": true, 00:14:18.918 "data_offset": 2048, 00:14:18.918 "data_size": 63488 00:14:18.918 }, 00:14:18.918 { 00:14:18.918 "name": "pt3", 00:14:18.918 "uuid": "01163f3f-cf33-578d-a1c0-187837d2c529", 00:14:18.918 "is_configured": true, 00:14:18.918 "data_offset": 2048, 00:14:18.918 "data_size": 63488 00:14:18.918 } 00:14:18.918 ] 00:14:18.918 }' 00:14:18.918 20:41:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:18.918 20:41:02 -- common/autotest_common.sh@10 -- # set +x 00:14:19.493 20:41:02 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:19.493 20:41:02 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:14:19.752 [2024-04-15 20:41:03.009430] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:19.752 20:41:03 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=034d2085-aa8c-43a5-9fab-ee6236ff3069 00:14:19.752 20:41:03 -- bdev/bdev_raid.sh@380 -- # '[' -z 034d2085-aa8c-43a5-9fab-ee6236ff3069 ']' 00:14:19.752 20:41:03 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:19.752 [2024-04-15 20:41:03.177112] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:19.752 [2024-04-15 20:41:03.177142] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:19.752 [2024-04-15 20:41:03.177204] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:19.752 [2024-04-15 20:41:03.177246] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:19.752 [2024-04-15 20:41:03.177255] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002c180 name raid_bdev1, state offline 00:14:19.753 20:41:03 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.753 20:41:03 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:14:20.011 20:41:03 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:14:20.011 20:41:03 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:14:20.011 20:41:03 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:20.011 20:41:03 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:20.270 20:41:03 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:20.270 20:41:03 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:20.270 20:41:03 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:20.270 20:41:03 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:14:20.530 20:41:03 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:20.530 20:41:03 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:20.530 20:41:03 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:14:20.530 20:41:03 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:20.530 20:41:03 -- common/autotest_common.sh@640 -- # local es=0 00:14:20.530 20:41:03 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:20.530 20:41:03 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:20.530 20:41:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:20.530 20:41:03 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:20.530 20:41:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:20.530 20:41:03 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:20.530 20:41:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:20.530 20:41:03 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:20.530 20:41:03 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:20.530 20:41:03 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:20.792 [2024-04-15 20:41:04.139706] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:20.792 [2024-04-15 20:41:04.141049] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:20.792 [2024-04-15 20:41:04.141107] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:20.792 [2024-04-15 20:41:04.141139] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:14:20.792 [2024-04-15 20:41:04.141205] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:14:20.792 [2024-04-15 20:41:04.141235] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:14:20.792 [2024-04-15 20:41:04.141271] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:20.792 [2024-04-15 20:41:04.141281] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002c780 name raid_bdev1, state configuring 00:14:20.792 request: 00:14:20.792 { 00:14:20.792 "name": "raid_bdev1", 00:14:20.792 "raid_level": "raid1", 00:14:20.792 "base_bdevs": [ 00:14:20.792 "malloc1", 00:14:20.792 "malloc2", 00:14:20.792 "malloc3" 00:14:20.792 ], 00:14:20.792 "superblock": false, 00:14:20.792 "method": "bdev_raid_create", 00:14:20.792 "req_id": 1 00:14:20.792 } 00:14:20.792 Got JSON-RPC error response 00:14:20.792 response: 00:14:20.792 { 00:14:20.792 "code": -17, 00:14:20.792 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:20.792 } 00:14:20.792 20:41:04 -- common/autotest_common.sh@643 -- # es=1 00:14:20.792 20:41:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:20.792 20:41:04 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:20.792 20:41:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:20.792 20:41:04 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:14:20.792 20:41:04 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
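The NOT wrapper above asserts the failure path: the malloc bdevs still carry raid_bdev1's superblock, so asking bdev_raid_create to build a new array directly on them is rejected with JSON-RPC error -17 (File exists) instead of silently reusing the members. A hand-rolled sketch of the same assertion, without the autotest_common.sh helper:

# expected to fail: malloc1..3 are already stamped with raid_bdev1's superblock
if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
       bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1; then
    echo 'bdev_raid_create unexpectedly succeeded on superblock-stamped bdevs' >&2
    exit 1
fi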
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:21.050 20:41:04 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:14:21.051 20:41:04 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:14:21.051 20:41:04 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:21.051 [2024-04-15 20:41:04.463177] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:21.051 [2024-04-15 20:41:04.463252] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:21.051 [2024-04-15 20:41:04.463295] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002d980 00:14:21.051 [2024-04-15 20:41:04.463318] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:21.051 [2024-04-15 20:41:04.464892] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:21.051 [2024-04-15 20:41:04.464935] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:21.051 [2024-04-15 20:41:04.465035] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:14:21.051 [2024-04-15 20:41:04.465089] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:21.051 pt1 00:14:21.051 20:41:04 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:21.051 20:41:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:21.051 20:41:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:21.051 20:41:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:21.051 20:41:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:21.051 20:41:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:21.051 20:41:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:21.051 20:41:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:21.051 20:41:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:21.051 20:41:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:21.051 20:41:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.051 20:41:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:21.309 20:41:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:21.309 "name": "raid_bdev1", 00:14:21.309 "uuid": "034d2085-aa8c-43a5-9fab-ee6236ff3069", 00:14:21.309 "strip_size_kb": 0, 00:14:21.309 "state": "configuring", 00:14:21.309 "raid_level": "raid1", 00:14:21.309 "superblock": true, 00:14:21.309 "num_base_bdevs": 3, 00:14:21.309 "num_base_bdevs_discovered": 1, 00:14:21.309 "num_base_bdevs_operational": 3, 00:14:21.309 "base_bdevs_list": [ 00:14:21.309 { 00:14:21.309 "name": "pt1", 00:14:21.309 "uuid": "27a95341-c5c4-5251-aff2-796732ea0b24", 00:14:21.309 "is_configured": true, 00:14:21.309 "data_offset": 2048, 00:14:21.309 "data_size": 63488 00:14:21.309 }, 00:14:21.309 { 00:14:21.309 "name": null, 00:14:21.309 "uuid": "d34bc5ed-cd31-55bf-a133-e002ee1bd7a8", 00:14:21.309 "is_configured": false, 00:14:21.309 "data_offset": 2048, 00:14:21.309 "data_size": 63488 00:14:21.309 }, 00:14:21.309 { 00:14:21.309 "name": null, 00:14:21.309 "uuid": "01163f3f-cf33-578d-a1c0-187837d2c529", 00:14:21.309 "is_configured": false, 00:14:21.309 "data_offset": 2048, 00:14:21.309 "data_size": 63488 00:14:21.309 } 
00:14:21.309 ] 00:14:21.309 }' 00:14:21.309 20:41:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:21.309 20:41:04 -- common/autotest_common.sh@10 -- # set +x 00:14:21.877 20:41:05 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:14:21.877 20:41:05 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:21.877 [2024-04-15 20:41:05.262005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:21.877 [2024-04-15 20:41:05.262098] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:21.877 [2024-04-15 20:41:05.262159] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002f480 00:14:21.877 [2024-04-15 20:41:05.262183] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:21.877 [2024-04-15 20:41:05.262525] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:21.877 [2024-04-15 20:41:05.262550] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:21.877 pt2 00:14:21.877 [2024-04-15 20:41:05.263081] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:21.877 [2024-04-15 20:41:05.263104] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:21.877 20:41:05 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:22.136 [2024-04-15 20:41:05.409778] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:22.136 20:41:05 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:22.136 20:41:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:22.136 20:41:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:22.136 20:41:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:22.136 20:41:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:22.136 20:41:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:22.136 20:41:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:22.136 20:41:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:22.136 20:41:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:22.136 20:41:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:22.136 20:41:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.136 20:41:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:22.136 20:41:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:22.136 "name": "raid_bdev1", 00:14:22.136 "uuid": "034d2085-aa8c-43a5-9fab-ee6236ff3069", 00:14:22.136 "strip_size_kb": 0, 00:14:22.136 "state": "configuring", 00:14:22.136 "raid_level": "raid1", 00:14:22.136 "superblock": true, 00:14:22.136 "num_base_bdevs": 3, 00:14:22.136 "num_base_bdevs_discovered": 1, 00:14:22.136 "num_base_bdevs_operational": 3, 00:14:22.136 "base_bdevs_list": [ 00:14:22.136 { 00:14:22.136 "name": "pt1", 00:14:22.136 "uuid": "27a95341-c5c4-5251-aff2-796732ea0b24", 00:14:22.136 "is_configured": true, 00:14:22.136 "data_offset": 2048, 00:14:22.136 "data_size": 63488 00:14:22.136 }, 00:14:22.136 { 00:14:22.136 "name": null, 00:14:22.136 "uuid": "d34bc5ed-cd31-55bf-a133-e002ee1bd7a8", 00:14:22.136 "is_configured": false, 
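The @416/@417 steps above exercise membership changes while the array is still half-assembled: re-registering pt2 lets examine find its superblock and claim it, and deleting it again drops the raid back to a single discovered base bdev, still in the configuring state. A sketch of that round trip (UUID as in the trace):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002   # examine claims pt2
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_passthru_delete pt2                                                         # back to 1 of 3 discovered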
00:14:22.136 "data_offset": 2048, 00:14:22.136 "data_size": 63488 00:14:22.136 }, 00:14:22.136 { 00:14:22.136 "name": null, 00:14:22.136 "uuid": "01163f3f-cf33-578d-a1c0-187837d2c529", 00:14:22.136 "is_configured": false, 00:14:22.136 "data_offset": 2048, 00:14:22.136 "data_size": 63488 00:14:22.136 } 00:14:22.136 ] 00:14:22.136 }' 00:14:22.136 20:41:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:22.136 20:41:05 -- common/autotest_common.sh@10 -- # set +x 00:14:22.704 20:41:06 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:14:22.704 20:41:06 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:22.704 20:41:06 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:22.963 [2024-04-15 20:41:06.236476] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:22.963 [2024-04-15 20:41:06.236552] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.963 [2024-04-15 20:41:06.236591] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000030c80 00:14:22.963 [2024-04-15 20:41:06.236627] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.963 [2024-04-15 20:41:06.237100] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.963 [2024-04-15 20:41:06.237136] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:22.963 [2024-04-15 20:41:06.237229] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:22.963 [2024-04-15 20:41:06.237248] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:22.963 pt2 00:14:22.963 20:41:06 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:14:22.963 20:41:06 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:22.963 20:41:06 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:22.963 [2024-04-15 20:41:06.388235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:22.963 [2024-04-15 20:41:06.388299] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.963 [2024-04-15 20:41:06.388330] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000032180 00:14:22.963 [2024-04-15 20:41:06.388353] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.963 [2024-04-15 20:41:06.388595] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.963 [2024-04-15 20:41:06.388621] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:22.963 [2024-04-15 20:41:06.388908] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:14:22.963 [2024-04-15 20:41:06.388935] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:22.963 [2024-04-15 20:41:06.389010] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002ee80 00:14:22.963 [2024-04-15 20:41:06.389019] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:22.963 [2024-04-15 20:41:06.389091] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:14:22.963 [2024-04-15 20:41:06.389261] 
bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002ee80 00:14:22.963 [2024-04-15 20:41:06.389271] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002ee80 00:14:22.963 [2024-04-15 20:41:06.389351] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.963 pt3 00:14:22.963 20:41:06 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:14:22.963 20:41:06 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:22.963 20:41:06 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:22.963 20:41:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:22.964 20:41:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:22.964 20:41:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:22.964 20:41:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:22.964 20:41:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:22.964 20:41:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:22.964 20:41:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:22.964 20:41:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:22.964 20:41:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:22.964 20:41:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.964 20:41:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:23.223 20:41:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:23.223 "name": "raid_bdev1", 00:14:23.223 "uuid": "034d2085-aa8c-43a5-9fab-ee6236ff3069", 00:14:23.223 "strip_size_kb": 0, 00:14:23.223 "state": "online", 00:14:23.223 "raid_level": "raid1", 00:14:23.223 "superblock": true, 00:14:23.223 "num_base_bdevs": 3, 00:14:23.223 "num_base_bdevs_discovered": 3, 00:14:23.223 "num_base_bdevs_operational": 3, 00:14:23.223 "base_bdevs_list": [ 00:14:23.223 { 00:14:23.223 "name": "pt1", 00:14:23.223 "uuid": "27a95341-c5c4-5251-aff2-796732ea0b24", 00:14:23.223 "is_configured": true, 00:14:23.223 "data_offset": 2048, 00:14:23.223 "data_size": 63488 00:14:23.223 }, 00:14:23.223 { 00:14:23.223 "name": "pt2", 00:14:23.223 "uuid": "d34bc5ed-cd31-55bf-a133-e002ee1bd7a8", 00:14:23.223 "is_configured": true, 00:14:23.223 "data_offset": 2048, 00:14:23.223 "data_size": 63488 00:14:23.223 }, 00:14:23.223 { 00:14:23.223 "name": "pt3", 00:14:23.223 "uuid": "01163f3f-cf33-578d-a1c0-187837d2c529", 00:14:23.223 "is_configured": true, 00:14:23.223 "data_offset": 2048, 00:14:23.223 "data_size": 63488 00:14:23.223 } 00:14:23.223 ] 00:14:23.223 }' 00:14:23.223 20:41:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:23.223 20:41:06 -- common/autotest_common.sh@10 -- # set +x 00:14:23.792 20:41:07 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:23.792 20:41:07 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:14:24.051 [2024-04-15 20:41:07.350896] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:24.051 20:41:07 -- bdev/bdev_raid.sh@430 -- # '[' 034d2085-aa8c-43a5-9fab-ee6236ff3069 '!=' 034d2085-aa8c-43a5-9fab-ee6236ff3069 ']' 00:14:24.051 20:41:07 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:14:24.051 20:41:07 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:24.051 20:41:07 -- bdev/bdev_raid.sh@196 -- # return 0 00:14:24.051 20:41:07 -- 
bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:24.051 [2024-04-15 20:41:07.518554] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:24.051 20:41:07 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:24.051 20:41:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:24.051 20:41:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:24.051 20:41:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:24.051 20:41:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:24.051 20:41:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:24.051 20:41:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:24.051 20:41:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:24.051 20:41:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:24.051 20:41:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:24.051 20:41:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.051 20:41:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:24.310 20:41:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:24.310 "name": "raid_bdev1", 00:14:24.310 "uuid": "034d2085-aa8c-43a5-9fab-ee6236ff3069", 00:14:24.310 "strip_size_kb": 0, 00:14:24.310 "state": "online", 00:14:24.310 "raid_level": "raid1", 00:14:24.310 "superblock": true, 00:14:24.310 "num_base_bdevs": 3, 00:14:24.310 "num_base_bdevs_discovered": 2, 00:14:24.310 "num_base_bdevs_operational": 2, 00:14:24.310 "base_bdevs_list": [ 00:14:24.310 { 00:14:24.310 "name": null, 00:14:24.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.310 "is_configured": false, 00:14:24.310 "data_offset": 2048, 00:14:24.310 "data_size": 63488 00:14:24.310 }, 00:14:24.310 { 00:14:24.310 "name": "pt2", 00:14:24.310 "uuid": "d34bc5ed-cd31-55bf-a133-e002ee1bd7a8", 00:14:24.310 "is_configured": true, 00:14:24.310 "data_offset": 2048, 00:14:24.310 "data_size": 63488 00:14:24.310 }, 00:14:24.310 { 00:14:24.310 "name": "pt3", 00:14:24.310 "uuid": "01163f3f-cf33-578d-a1c0-187837d2c529", 00:14:24.311 "is_configured": true, 00:14:24.311 "data_offset": 2048, 00:14:24.311 "data_size": 63488 00:14:24.311 } 00:14:24.311 ] 00:14:24.311 }' 00:14:24.311 20:41:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:24.311 20:41:07 -- common/autotest_common.sh@10 -- # set +x 00:14:24.879 20:41:08 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:24.879 [2024-04-15 20:41:08.341293] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:24.879 [2024-04-15 20:41:08.341328] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:24.879 [2024-04-15 20:41:08.341374] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:24.879 [2024-04-15 20:41:08.341407] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:24.879 [2024-04-15 20:41:08.341416] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002ee80 name raid_bdev1, state offline 00:14:24.879 20:41:08 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:24.879 20:41:08 -- 
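Because raid1 is redundant, hot-removing pt1 from the running array (the @436 step above) leaves raid_bdev1 online in a degraded two-of-three configuration rather than failing it; the freed slot shows up as a null entry in base_bdevs_list. A sketch of the check the test performs, reusing the jq filter pattern from the trace:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
    jq -r '.[] | select(.name == "raid_bdev1") | "\(.state) \(.num_base_bdevs_discovered)"'
# expected output: "online 2"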
bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:14:25.137 20:41:08 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:14:25.137 20:41:08 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:14:25.137 20:41:08 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:14:25.137 20:41:08 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:14:25.137 20:41:08 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:25.396 20:41:08 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:14:25.396 20:41:08 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:14:25.396 20:41:08 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:14:25.656 20:41:08 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:14:25.656 20:41:08 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:14:25.656 20:41:08 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:14:25.656 20:41:08 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:14:25.656 20:41:08 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:25.656 [2024-04-15 20:41:09.064214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:25.656 [2024-04-15 20:41:09.064286] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.656 [2024-04-15 20:41:09.064331] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000033680 00:14:25.656 [2024-04-15 20:41:09.064352] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.656 [2024-04-15 20:41:09.065916] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.656 [2024-04-15 20:41:09.065954] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:25.656 [2024-04-15 20:41:09.066040] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:25.656 [2024-04-15 20:41:09.066094] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:25.656 pt2 00:14:25.656 20:41:09 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:25.656 20:41:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:25.656 20:41:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:25.656 20:41:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:25.656 20:41:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:25.656 20:41:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:25.656 20:41:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:25.656 20:41:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:25.656 20:41:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:25.656 20:41:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:25.656 20:41:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.656 20:41:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:25.915 20:41:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:25.915 "name": "raid_bdev1", 00:14:25.915 "uuid": "034d2085-aa8c-43a5-9fab-ee6236ff3069", 00:14:25.915 "strip_size_kb": 0, 00:14:25.915 "state": "configuring", 00:14:25.915 "raid_level": "raid1", 00:14:25.915 
"superblock": true, 00:14:25.915 "num_base_bdevs": 3, 00:14:25.915 "num_base_bdevs_discovered": 1, 00:14:25.915 "num_base_bdevs_operational": 2, 00:14:25.915 "base_bdevs_list": [ 00:14:25.915 { 00:14:25.915 "name": null, 00:14:25.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.915 "is_configured": false, 00:14:25.915 "data_offset": 2048, 00:14:25.915 "data_size": 63488 00:14:25.915 }, 00:14:25.915 { 00:14:25.915 "name": "pt2", 00:14:25.915 "uuid": "d34bc5ed-cd31-55bf-a133-e002ee1bd7a8", 00:14:25.915 "is_configured": true, 00:14:25.915 "data_offset": 2048, 00:14:25.915 "data_size": 63488 00:14:25.915 }, 00:14:25.915 { 00:14:25.915 "name": null, 00:14:25.915 "uuid": "01163f3f-cf33-578d-a1c0-187837d2c529", 00:14:25.915 "is_configured": false, 00:14:25.915 "data_offset": 2048, 00:14:25.915 "data_size": 63488 00:14:25.915 } 00:14:25.915 ] 00:14:25.915 }' 00:14:25.915 20:41:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:25.915 20:41:09 -- common/autotest_common.sh@10 -- # set +x 00:14:26.484 20:41:09 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:14:26.484 20:41:09 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:14:26.484 20:41:09 -- bdev/bdev_raid.sh@462 -- # i=2 00:14:26.484 20:41:09 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:26.484 [2024-04-15 20:41:09.965134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:26.484 [2024-04-15 20:41:09.965228] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.484 [2024-04-15 20:41:09.965271] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000035180 00:14:26.484 [2024-04-15 20:41:09.965290] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.484 [2024-04-15 20:41:09.965573] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.484 [2024-04-15 20:41:09.965596] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:26.484 [2024-04-15 20:41:09.965869] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:14:26.484 [2024-04-15 20:41:09.965903] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:26.484 [2024-04-15 20:41:09.965974] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000034b80 00:14:26.484 [2024-04-15 20:41:09.965984] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:26.484 [2024-04-15 20:41:09.966064] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:26.484 [2024-04-15 20:41:09.966237] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000034b80 00:14:26.484 [2024-04-15 20:41:09.966247] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000034b80 00:14:26.484 [2024-04-15 20:41:09.966327] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.484 pt3 00:14:26.484 20:41:09 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:26.484 20:41:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:26.484 20:41:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:26.484 20:41:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:26.484 20:41:09 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:26.484 20:41:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:26.484 20:41:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:26.484 20:41:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:26.484 20:41:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:26.484 20:41:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:26.484 20:41:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:26.743 20:41:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.743 20:41:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:26.743 "name": "raid_bdev1", 00:14:26.743 "uuid": "034d2085-aa8c-43a5-9fab-ee6236ff3069", 00:14:26.743 "strip_size_kb": 0, 00:14:26.743 "state": "online", 00:14:26.743 "raid_level": "raid1", 00:14:26.743 "superblock": true, 00:14:26.743 "num_base_bdevs": 3, 00:14:26.743 "num_base_bdevs_discovered": 2, 00:14:26.743 "num_base_bdevs_operational": 2, 00:14:26.743 "base_bdevs_list": [ 00:14:26.743 { 00:14:26.743 "name": null, 00:14:26.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.743 "is_configured": false, 00:14:26.743 "data_offset": 2048, 00:14:26.743 "data_size": 63488 00:14:26.743 }, 00:14:26.743 { 00:14:26.743 "name": "pt2", 00:14:26.743 "uuid": "d34bc5ed-cd31-55bf-a133-e002ee1bd7a8", 00:14:26.743 "is_configured": true, 00:14:26.743 "data_offset": 2048, 00:14:26.743 "data_size": 63488 00:14:26.743 }, 00:14:26.743 { 00:14:26.743 "name": "pt3", 00:14:26.743 "uuid": "01163f3f-cf33-578d-a1c0-187837d2c529", 00:14:26.743 "is_configured": true, 00:14:26.743 "data_offset": 2048, 00:14:26.743 "data_size": 63488 00:14:26.743 } 00:14:26.743 ] 00:14:26.743 }' 00:14:26.743 20:41:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:26.743 20:41:10 -- common/autotest_common.sh@10 -- # set +x 00:14:27.316 20:41:10 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:14:27.316 20:41:10 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:27.582 [2024-04-15 20:41:10.927654] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:27.582 [2024-04-15 20:41:10.927688] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:27.582 [2024-04-15 20:41:10.927735] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:27.582 [2024-04-15 20:41:10.927769] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:27.582 [2024-04-15 20:41:10.927778] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000034b80 name raid_bdev1, state offline 00:14:27.582 20:41:10 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:27.582 20:41:10 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:14:27.841 20:41:11 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:14:27.841 20:41:11 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:14:27.841 20:41:11 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:27.841 [2024-04-15 20:41:11.287128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:27.841 [2024-04-15 20:41:11.287204] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.841 [2024-04-15 20:41:11.287261] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000036680 00:14:27.841 [2024-04-15 20:41:11.287280] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.841 [2024-04-15 20:41:11.288984] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.841 [2024-04-15 20:41:11.289023] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:27.841 [2024-04-15 20:41:11.289119] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:14:27.841 [2024-04-15 20:41:11.289155] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:27.841 pt1 00:14:27.841 20:41:11 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:27.841 20:41:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:27.841 20:41:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:27.841 20:41:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:27.841 20:41:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:27.841 20:41:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:27.841 20:41:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:27.841 20:41:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:27.841 20:41:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:27.841 20:41:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:27.841 20:41:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.841 20:41:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:28.100 20:41:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:28.100 "name": "raid_bdev1", 00:14:28.100 "uuid": "034d2085-aa8c-43a5-9fab-ee6236ff3069", 00:14:28.100 "strip_size_kb": 0, 00:14:28.100 "state": "configuring", 00:14:28.100 "raid_level": "raid1", 00:14:28.100 "superblock": true, 00:14:28.100 "num_base_bdevs": 3, 00:14:28.100 "num_base_bdevs_discovered": 1, 00:14:28.100 "num_base_bdevs_operational": 3, 00:14:28.100 "base_bdevs_list": [ 00:14:28.100 { 00:14:28.100 "name": "pt1", 00:14:28.100 "uuid": "27a95341-c5c4-5251-aff2-796732ea0b24", 00:14:28.100 "is_configured": true, 00:14:28.100 "data_offset": 2048, 00:14:28.100 "data_size": 63488 00:14:28.100 }, 00:14:28.100 { 00:14:28.100 "name": null, 00:14:28.100 "uuid": "d34bc5ed-cd31-55bf-a133-e002ee1bd7a8", 00:14:28.100 "is_configured": false, 00:14:28.100 "data_offset": 2048, 00:14:28.100 "data_size": 63488 00:14:28.100 }, 00:14:28.100 { 00:14:28.100 "name": null, 00:14:28.100 "uuid": "01163f3f-cf33-578d-a1c0-187837d2c529", 00:14:28.100 "is_configured": false, 00:14:28.100 "data_offset": 2048, 00:14:28.100 "data_size": 63488 00:14:28.100 } 00:14:28.100 ] 00:14:28.100 }' 00:14:28.100 20:41:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:28.100 20:41:11 -- common/autotest_common.sh@10 -- # set +x 00:14:28.668 20:41:12 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:14:28.668 20:41:12 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:14:28.668 20:41:12 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:28.925 20:41:12 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:14:28.925 20:41:12 -- 
bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:14:28.925 20:41:12 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:14:29.183 20:41:12 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:14:29.183 20:41:12 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:14:29.183 20:41:12 -- bdev/bdev_raid.sh@489 -- # i=2 00:14:29.183 20:41:12 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:29.183 [2024-04-15 20:41:12.653233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:29.183 [2024-04-15 20:41:12.653363] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.183 [2024-04-15 20:41:12.653415] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000038180 00:14:29.183 [2024-04-15 20:41:12.653454] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.183 [2024-04-15 20:41:12.656174] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.183 [2024-04-15 20:41:12.656338] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:29.183 pt3 00:14:29.183 [2024-04-15 20:41:12.656685] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:14:29.183 [2024-04-15 20:41:12.656732] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:29.183 [2024-04-15 20:41:12.656758] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:29.183 [2024-04-15 20:41:12.656800] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000037b80 name raid_bdev1, state configuring 00:14:29.183 [2024-04-15 20:41:12.656960] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:29.183 20:41:12 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:29.183 20:41:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:29.183 20:41:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:29.183 20:41:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:29.183 20:41:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:29.183 20:41:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:29.183 20:41:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:29.183 20:41:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:29.183 20:41:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:29.183 20:41:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:29.183 20:41:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:29.183 20:41:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.442 20:41:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:29.442 "name": "raid_bdev1", 00:14:29.442 "uuid": "034d2085-aa8c-43a5-9fab-ee6236ff3069", 00:14:29.442 "strip_size_kb": 0, 00:14:29.442 "state": "configuring", 00:14:29.442 "raid_level": "raid1", 00:14:29.442 "superblock": true, 00:14:29.442 "num_base_bdevs": 3, 00:14:29.442 "num_base_bdevs_discovered": 1, 00:14:29.442 "num_base_bdevs_operational": 2, 00:14:29.442 "base_bdevs_list": [ 
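The @490 step above is the superblock-generation check: pt3 comes back carrying sequence number 4, newer than the generation 2 the half-configured raid_bdev1 was assembled from, so examine tears down the stale raid and re-creates it from pt3's superblock, leaving pt3 as the only discovered member. A sketch of observing that outcome:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
    jq -r '.[] | select(.name == "raid_bdev1") | "\(.state) \(.num_base_bdevs_discovered)"'
# expected output: "configuring 1" (only pt3, the member with the newest superblock, is attached)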
00:14:29.442 { 00:14:29.442 "name": null, 00:14:29.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.442 "is_configured": false, 00:14:29.442 "data_offset": 2048, 00:14:29.442 "data_size": 63488 00:14:29.442 }, 00:14:29.442 { 00:14:29.442 "name": null, 00:14:29.442 "uuid": "d34bc5ed-cd31-55bf-a133-e002ee1bd7a8", 00:14:29.442 "is_configured": false, 00:14:29.442 "data_offset": 2048, 00:14:29.442 "data_size": 63488 00:14:29.442 }, 00:14:29.442 { 00:14:29.442 "name": "pt3", 00:14:29.442 "uuid": "01163f3f-cf33-578d-a1c0-187837d2c529", 00:14:29.442 "is_configured": true, 00:14:29.442 "data_offset": 2048, 00:14:29.442 "data_size": 63488 00:14:29.442 } 00:14:29.442 ] 00:14:29.442 }' 00:14:29.442 20:41:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:29.442 20:41:12 -- common/autotest_common.sh@10 -- # set +x 00:14:30.009 20:41:13 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:14:30.009 20:41:13 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:14:30.009 20:41:13 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:30.275 [2024-04-15 20:41:13.631738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:30.275 [2024-04-15 20:41:13.631839] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.275 [2024-04-15 20:41:13.631883] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000039980 00:14:30.275 [2024-04-15 20:41:13.631912] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.275 [2024-04-15 20:41:13.632222] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.275 [2024-04-15 20:41:13.632249] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:30.275 [2024-04-15 20:41:13.632329] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:30.275 [2024-04-15 20:41:13.632347] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:30.275 [2024-04-15 20:41:13.632411] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000039380 00:14:30.275 [2024-04-15 20:41:13.632420] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:30.275 [2024-04-15 20:41:13.632503] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:30.275 [2024-04-15 20:41:13.633298] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000039380 00:14:30.275 [2024-04-15 20:41:13.633360] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000039380 00:14:30.275 [2024-04-15 20:41:13.633704] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.275 pt2 00:14:30.275 20:41:13 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:14:30.275 20:41:13 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:14:30.275 20:41:13 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:30.275 20:41:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:30.275 20:41:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:30.275 20:41:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:30.275 20:41:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:30.275 20:41:13 -- bdev/bdev_raid.sh@121 -- # 
local num_base_bdevs_operational=2 00:14:30.275 20:41:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:30.275 20:41:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:30.275 20:41:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:30.275 20:41:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:30.275 20:41:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:30.275 20:41:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.549 20:41:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:30.549 "name": "raid_bdev1", 00:14:30.549 "uuid": "034d2085-aa8c-43a5-9fab-ee6236ff3069", 00:14:30.549 "strip_size_kb": 0, 00:14:30.549 "state": "online", 00:14:30.549 "raid_level": "raid1", 00:14:30.549 "superblock": true, 00:14:30.549 "num_base_bdevs": 3, 00:14:30.549 "num_base_bdevs_discovered": 2, 00:14:30.549 "num_base_bdevs_operational": 2, 00:14:30.549 "base_bdevs_list": [ 00:14:30.549 { 00:14:30.549 "name": null, 00:14:30.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.549 "is_configured": false, 00:14:30.549 "data_offset": 2048, 00:14:30.549 "data_size": 63488 00:14:30.549 }, 00:14:30.549 { 00:14:30.549 "name": "pt2", 00:14:30.549 "uuid": "d34bc5ed-cd31-55bf-a133-e002ee1bd7a8", 00:14:30.549 "is_configured": true, 00:14:30.549 "data_offset": 2048, 00:14:30.549 "data_size": 63488 00:14:30.549 }, 00:14:30.549 { 00:14:30.550 "name": "pt3", 00:14:30.550 "uuid": "01163f3f-cf33-578d-a1c0-187837d2c529", 00:14:30.550 "is_configured": true, 00:14:30.550 "data_offset": 2048, 00:14:30.550 "data_size": 63488 00:14:30.550 } 00:14:30.550 ] 00:14:30.550 }' 00:14:30.550 20:41:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:30.550 20:41:13 -- common/autotest_common.sh@10 -- # set +x 00:14:31.116 20:41:14 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:31.116 20:41:14 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:14:31.116 [2024-04-15 20:41:14.494558] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:31.116 20:41:14 -- bdev/bdev_raid.sh@506 -- # '[' 034d2085-aa8c-43a5-9fab-ee6236ff3069 '!=' 034d2085-aa8c-43a5-9fab-ee6236ff3069 ']' 00:14:31.116 20:41:14 -- bdev/bdev_raid.sh@511 -- # killprocess 51881 00:14:31.116 20:41:14 -- common/autotest_common.sh@926 -- # '[' -z 51881 ']' 00:14:31.116 20:41:14 -- common/autotest_common.sh@930 -- # kill -0 51881 00:14:31.116 20:41:14 -- common/autotest_common.sh@931 -- # uname 00:14:31.116 20:41:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:31.116 20:41:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 51881 00:14:31.116 killing process with pid 51881 00:14:31.116 20:41:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:31.116 20:41:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:31.116 20:41:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 51881' 00:14:31.116 20:41:14 -- common/autotest_common.sh@945 -- # kill 51881 00:14:31.116 20:41:14 -- common/autotest_common.sh@950 -- # wait 51881 00:14:31.116 [2024-04-15 20:41:14.546558] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:31.116 [2024-04-15 20:41:14.546621] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:31.116 [2024-04-15 20:41:14.546666] bdev_raid.c: 
426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:31.116 [2024-04-15 20:41:14.546688] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000039380 name raid_bdev1, state offline 00:14:31.375 [2024-04-15 20:41:14.819991] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:32.748 20:41:16 -- bdev/bdev_raid.sh@513 -- # return 0 00:14:32.748 ************************************ 00:14:32.748 END TEST raid_superblock_test 00:14:32.748 ************************************ 00:14:32.748 00:14:32.748 real 0m16.692s 00:14:32.748 user 0m29.621s 00:14:32.748 sys 0m2.014s 00:14:32.748 20:41:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:32.748 20:41:16 -- common/autotest_common.sh@10 -- # set +x 00:14:32.748 20:41:16 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:14:32.748 20:41:16 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:14:32.748 20:41:16 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:14:32.748 20:41:16 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:32.748 20:41:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:32.748 20:41:16 -- common/autotest_common.sh@10 -- # set +x 00:14:32.748 ************************************ 00:14:32.748 START TEST raid_state_function_test 00:14:32.748 ************************************ 00:14:32.748 20:41:16 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 false 00:14:32.748 20:41:16 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:14:32.749 20:41:16 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:14:32.749 20:41:16 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:14:32.749 20:41:16 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:32.749 20:41:16 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:14:32.749 20:41:16 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:32.749 20:41:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:32.749 20:41:16 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:32.749 20:41:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:32.749 20:41:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:32.749 20:41:16 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:32.749 20:41:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:32.749 20:41:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:32.749 20:41:16 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:14:32.749 20:41:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:32.749 20:41:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:32.749 20:41:16 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:14:32.749 20:41:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:32.749 20:41:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:32.749 Process raid pid: 52466 00:14:32.749 20:41:16 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:32.749 20:41:16 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:32.749 20:41:16 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:32.749 20:41:16 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:32.749 20:41:16 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:32.749 20:41:16 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:14:32.749 20:41:16 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:32.749 20:41:16 -- bdev/bdev_raid.sh@214 -- # 
strip_size_create_arg='-z 64' 00:14:32.749 20:41:16 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:14:32.749 20:41:16 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:14:32.749 20:41:16 -- bdev/bdev_raid.sh@226 -- # raid_pid=52466 00:14:32.749 20:41:16 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 52466' 00:14:32.749 20:41:16 -- bdev/bdev_raid.sh@228 -- # waitforlisten 52466 /var/tmp/spdk-raid.sock 00:14:32.749 20:41:16 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:32.749 20:41:16 -- common/autotest_common.sh@819 -- # '[' -z 52466 ']' 00:14:32.749 20:41:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:32.749 20:41:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:32.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:32.749 20:41:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:32.749 20:41:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:32.749 20:41:16 -- common/autotest_common.sh@10 -- # set +x 00:14:33.007 [2024-04-15 20:41:16.312480] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:14:33.007 [2024-04-15 20:41:16.312638] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:33.007 [2024-04-15 20:41:16.461141] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.265 [2024-04-15 20:41:16.659106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.523 [2024-04-15 20:41:16.857929] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.461 20:41:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:34.462 20:41:17 -- common/autotest_common.sh@852 -- # return 0 00:14:34.462 20:41:17 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:34.462 [2024-04-15 20:41:17.873107] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:34.462 [2024-04-15 20:41:17.873180] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:34.462 [2024-04-15 20:41:17.873192] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:34.462 [2024-04-15 20:41:17.873209] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:34.462 [2024-04-15 20:41:17.873222] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:34.462 [2024-04-15 20:41:17.873264] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:34.462 [2024-04-15 20:41:17.873272] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:34.462 [2024-04-15 20:41:17.873293] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:34.462 20:41:17 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:34.462 20:41:17 -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:14:34.462 20:41:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:34.462 20:41:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:34.462 20:41:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:34.462 20:41:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:34.462 20:41:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:34.462 20:41:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:34.462 20:41:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:34.462 20:41:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:34.462 20:41:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.462 20:41:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:34.722 20:41:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:34.722 "name": "Existed_Raid", 00:14:34.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.722 "strip_size_kb": 64, 00:14:34.722 "state": "configuring", 00:14:34.722 "raid_level": "raid0", 00:14:34.722 "superblock": false, 00:14:34.722 "num_base_bdevs": 4, 00:14:34.722 "num_base_bdevs_discovered": 0, 00:14:34.722 "num_base_bdevs_operational": 4, 00:14:34.722 "base_bdevs_list": [ 00:14:34.722 { 00:14:34.722 "name": "BaseBdev1", 00:14:34.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.722 "is_configured": false, 00:14:34.722 "data_offset": 0, 00:14:34.722 "data_size": 0 00:14:34.722 }, 00:14:34.722 { 00:14:34.722 "name": "BaseBdev2", 00:14:34.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.722 "is_configured": false, 00:14:34.722 "data_offset": 0, 00:14:34.722 "data_size": 0 00:14:34.722 }, 00:14:34.722 { 00:14:34.722 "name": "BaseBdev3", 00:14:34.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.722 "is_configured": false, 00:14:34.722 "data_offset": 0, 00:14:34.722 "data_size": 0 00:14:34.722 }, 00:14:34.722 { 00:14:34.722 "name": "BaseBdev4", 00:14:34.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.722 "is_configured": false, 00:14:34.722 "data_offset": 0, 00:14:34.722 "data_size": 0 00:14:34.722 } 00:14:34.722 ] 00:14:34.722 }' 00:14:34.722 20:41:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:34.722 20:41:18 -- common/autotest_common.sh@10 -- # set +x 00:14:35.291 20:41:18 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:35.291 [2024-04-15 20:41:18.707772] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:35.291 [2024-04-15 20:41:18.707830] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:14:35.291 20:41:18 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:35.549 [2024-04-15 20:41:18.987415] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:35.549 [2024-04-15 20:41:18.987485] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:35.549 [2024-04-15 20:41:18.987495] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:35.549 [2024-04-15 20:41:18.987524] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 
doesn't exist now 00:14:35.549 [2024-04-15 20:41:18.987532] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:35.549 [2024-04-15 20:41:18.987553] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:35.549 [2024-04-15 20:41:18.987560] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:35.549 [2024-04-15 20:41:18.987581] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:35.549 20:41:19 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:35.807 BaseBdev1 00:14:35.807 [2024-04-15 20:41:19.222728] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:35.807 20:41:19 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:35.807 20:41:19 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:35.807 20:41:19 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:35.807 20:41:19 -- common/autotest_common.sh@889 -- # local i 00:14:35.807 20:41:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:35.807 20:41:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:35.807 20:41:19 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:36.066 20:41:19 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:36.325 [ 00:14:36.325 { 00:14:36.325 "name": "BaseBdev1", 00:14:36.325 "aliases": [ 00:14:36.325 "8c31abe6-1866-4c30-887d-30babd282572" 00:14:36.325 ], 00:14:36.325 "product_name": "Malloc disk", 00:14:36.325 "block_size": 512, 00:14:36.325 "num_blocks": 65536, 00:14:36.325 "uuid": "8c31abe6-1866-4c30-887d-30babd282572", 00:14:36.325 "assigned_rate_limits": { 00:14:36.325 "rw_ios_per_sec": 0, 00:14:36.325 "rw_mbytes_per_sec": 0, 00:14:36.325 "r_mbytes_per_sec": 0, 00:14:36.325 "w_mbytes_per_sec": 0 00:14:36.325 }, 00:14:36.325 "claimed": true, 00:14:36.325 "claim_type": "exclusive_write", 00:14:36.325 "zoned": false, 00:14:36.325 "supported_io_types": { 00:14:36.325 "read": true, 00:14:36.325 "write": true, 00:14:36.325 "unmap": true, 00:14:36.325 "write_zeroes": true, 00:14:36.325 "flush": true, 00:14:36.325 "reset": true, 00:14:36.325 "compare": false, 00:14:36.325 "compare_and_write": false, 00:14:36.325 "abort": true, 00:14:36.325 "nvme_admin": false, 00:14:36.325 "nvme_io": false 00:14:36.325 }, 00:14:36.325 "memory_domains": [ 00:14:36.325 { 00:14:36.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.326 "dma_device_type": 2 00:14:36.326 } 00:14:36.326 ], 00:14:36.326 "driver_specific": {} 00:14:36.326 } 00:14:36.326 ] 00:14:36.326 20:41:19 -- common/autotest_common.sh@895 -- # return 0 00:14:36.326 20:41:19 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:36.326 20:41:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:36.326 20:41:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:36.326 20:41:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:36.326 20:41:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:36.326 20:41:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:36.326 20:41:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 
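For reference, the waitforbdev step traced just above can be replayed by hand with the same rpc.py calls; a minimal sketch, assuming a bdev_svc instance is still listening on /var/tmp/spdk-raid.sock (the rpc() wrapper is a hypothetical convenience for this note, not part of the harness):

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  rpc bdev_malloc_create 32 512 -b BaseBdev1   # 32 MiB at 512-byte blocks = the 65536 num_blocks dumped above
  rpc bdev_wait_for_examine                    # let examine/claim callbacks settle before querying
  rpc bdev_get_bdevs -b BaseBdev1 -t 2000      # same 2000 ms bdev_timeout the harness defaults to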
00:14:36.326 20:41:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:36.326 20:41:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:36.326 20:41:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:36.326 20:41:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.326 20:41:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:36.326 20:41:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:36.326 "name": "Existed_Raid", 00:14:36.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.326 "strip_size_kb": 64, 00:14:36.326 "state": "configuring", 00:14:36.326 "raid_level": "raid0", 00:14:36.326 "superblock": false, 00:14:36.326 "num_base_bdevs": 4, 00:14:36.326 "num_base_bdevs_discovered": 1, 00:14:36.326 "num_base_bdevs_operational": 4, 00:14:36.326 "base_bdevs_list": [ 00:14:36.326 { 00:14:36.326 "name": "BaseBdev1", 00:14:36.326 "uuid": "8c31abe6-1866-4c30-887d-30babd282572", 00:14:36.326 "is_configured": true, 00:14:36.326 "data_offset": 0, 00:14:36.326 "data_size": 65536 00:14:36.326 }, 00:14:36.326 { 00:14:36.326 "name": "BaseBdev2", 00:14:36.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.326 "is_configured": false, 00:14:36.326 "data_offset": 0, 00:14:36.326 "data_size": 0 00:14:36.326 }, 00:14:36.326 { 00:14:36.326 "name": "BaseBdev3", 00:14:36.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.326 "is_configured": false, 00:14:36.326 "data_offset": 0, 00:14:36.326 "data_size": 0 00:14:36.326 }, 00:14:36.326 { 00:14:36.326 "name": "BaseBdev4", 00:14:36.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.326 "is_configured": false, 00:14:36.326 "data_offset": 0, 00:14:36.326 "data_size": 0 00:14:36.326 } 00:14:36.326 ] 00:14:36.326 }' 00:14:36.326 20:41:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:36.326 20:41:19 -- common/autotest_common.sh@10 -- # set +x 00:14:36.893 20:41:20 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:37.152 [2024-04-15 20:41:20.441005] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:37.152 [2024-04-15 20:41:20.441050] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027680 name Existed_Raid, state configuring 00:14:37.152 20:41:20 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:14:37.152 20:41:20 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:37.152 [2024-04-15 20:41:20.616828] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:37.152 [2024-04-15 20:41:20.618291] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:37.152 [2024-04-15 20:41:20.618361] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:37.152 [2024-04-15 20:41:20.618380] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:37.152 [2024-04-15 20:41:20.618406] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:37.152 [2024-04-15 20:41:20.618414] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:37.152 [2024-04-15 20:41:20.618431] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:37.152 20:41:20 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:37.152 20:41:20 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:37.152 20:41:20 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:37.152 20:41:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:37.152 20:41:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:37.152 20:41:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:37.152 20:41:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:37.152 20:41:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:37.152 20:41:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:37.152 20:41:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:37.152 20:41:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:37.152 20:41:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:37.152 20:41:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:37.152 20:41:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.411 20:41:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:37.411 "name": "Existed_Raid", 00:14:37.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.411 "strip_size_kb": 64, 00:14:37.411 "state": "configuring", 00:14:37.411 "raid_level": "raid0", 00:14:37.411 "superblock": false, 00:14:37.411 "num_base_bdevs": 4, 00:14:37.411 "num_base_bdevs_discovered": 1, 00:14:37.411 "num_base_bdevs_operational": 4, 00:14:37.411 "base_bdevs_list": [ 00:14:37.411 { 00:14:37.411 "name": "BaseBdev1", 00:14:37.411 "uuid": "8c31abe6-1866-4c30-887d-30babd282572", 00:14:37.411 "is_configured": true, 00:14:37.411 "data_offset": 0, 00:14:37.411 "data_size": 65536 00:14:37.411 }, 00:14:37.411 { 00:14:37.411 "name": "BaseBdev2", 00:14:37.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.411 "is_configured": false, 00:14:37.411 "data_offset": 0, 00:14:37.411 "data_size": 0 00:14:37.411 }, 00:14:37.411 { 00:14:37.411 "name": "BaseBdev3", 00:14:37.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.411 "is_configured": false, 00:14:37.411 "data_offset": 0, 00:14:37.411 "data_size": 0 00:14:37.411 }, 00:14:37.411 { 00:14:37.411 "name": "BaseBdev4", 00:14:37.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.411 "is_configured": false, 00:14:37.411 "data_offset": 0, 00:14:37.411 "data_size": 0 00:14:37.411 } 00:14:37.411 ] 00:14:37.411 }' 00:14:37.411 20:41:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:37.411 20:41:20 -- common/autotest_common.sh@10 -- # set +x 00:14:37.978 20:41:21 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:38.238 [2024-04-15 20:41:21.606475] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:38.238 BaseBdev2 00:14:38.238 20:41:21 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:38.238 20:41:21 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:14:38.238 20:41:21 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:38.238 20:41:21 -- common/autotest_common.sh@889 -- # local i 00:14:38.238 20:41:21 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:38.238 20:41:21 -- common/autotest_common.sh@890 -- # 
bdev_timeout=2000 00:14:38.238 20:41:21 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:38.496 20:41:21 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:38.496 [ 00:14:38.496 { 00:14:38.496 "name": "BaseBdev2", 00:14:38.496 "aliases": [ 00:14:38.496 "5025e128-f1bb-4ee6-8123-c869db724216" 00:14:38.496 ], 00:14:38.496 "product_name": "Malloc disk", 00:14:38.496 "block_size": 512, 00:14:38.496 "num_blocks": 65536, 00:14:38.496 "uuid": "5025e128-f1bb-4ee6-8123-c869db724216", 00:14:38.496 "assigned_rate_limits": { 00:14:38.496 "rw_ios_per_sec": 0, 00:14:38.496 "rw_mbytes_per_sec": 0, 00:14:38.496 "r_mbytes_per_sec": 0, 00:14:38.496 "w_mbytes_per_sec": 0 00:14:38.496 }, 00:14:38.496 "claimed": true, 00:14:38.496 "claim_type": "exclusive_write", 00:14:38.496 "zoned": false, 00:14:38.496 "supported_io_types": { 00:14:38.496 "read": true, 00:14:38.496 "write": true, 00:14:38.496 "unmap": true, 00:14:38.496 "write_zeroes": true, 00:14:38.496 "flush": true, 00:14:38.496 "reset": true, 00:14:38.496 "compare": false, 00:14:38.496 "compare_and_write": false, 00:14:38.496 "abort": true, 00:14:38.496 "nvme_admin": false, 00:14:38.496 "nvme_io": false 00:14:38.496 }, 00:14:38.496 "memory_domains": [ 00:14:38.496 { 00:14:38.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.496 "dma_device_type": 2 00:14:38.496 } 00:14:38.496 ], 00:14:38.496 "driver_specific": {} 00:14:38.496 } 00:14:38.496 ] 00:14:38.496 20:41:21 -- common/autotest_common.sh@895 -- # return 0 00:14:38.496 20:41:21 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:38.496 20:41:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:38.496 20:41:21 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:38.496 20:41:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:38.496 20:41:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:38.496 20:41:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:38.496 20:41:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:38.496 20:41:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:38.496 20:41:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:38.496 20:41:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:38.496 20:41:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:38.496 20:41:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:38.496 20:41:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:38.496 20:41:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.754 20:41:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:38.754 "name": "Existed_Raid", 00:14:38.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.754 "strip_size_kb": 64, 00:14:38.754 "state": "configuring", 00:14:38.754 "raid_level": "raid0", 00:14:38.754 "superblock": false, 00:14:38.754 "num_base_bdevs": 4, 00:14:38.754 "num_base_bdevs_discovered": 2, 00:14:38.754 "num_base_bdevs_operational": 4, 00:14:38.754 "base_bdevs_list": [ 00:14:38.754 { 00:14:38.754 "name": "BaseBdev1", 00:14:38.754 "uuid": "8c31abe6-1866-4c30-887d-30babd282572", 00:14:38.754 "is_configured": true, 00:14:38.754 "data_offset": 0, 00:14:38.754 "data_size": 65536 00:14:38.754 }, 
00:14:38.754 { 00:14:38.754 "name": "BaseBdev2", 00:14:38.754 "uuid": "5025e128-f1bb-4ee6-8123-c869db724216", 00:14:38.754 "is_configured": true, 00:14:38.754 "data_offset": 0, 00:14:38.754 "data_size": 65536 00:14:38.754 }, 00:14:38.754 { 00:14:38.754 "name": "BaseBdev3", 00:14:38.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.754 "is_configured": false, 00:14:38.754 "data_offset": 0, 00:14:38.754 "data_size": 0 00:14:38.754 }, 00:14:38.754 { 00:14:38.754 "name": "BaseBdev4", 00:14:38.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.754 "is_configured": false, 00:14:38.754 "data_offset": 0, 00:14:38.754 "data_size": 0 00:14:38.754 } 00:14:38.754 ] 00:14:38.754 }' 00:14:38.754 20:41:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:38.754 20:41:22 -- common/autotest_common.sh@10 -- # set +x 00:14:39.322 20:41:22 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:39.581 BaseBdev3 00:14:39.581 [2024-04-15 20:41:22.912684] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:39.581 20:41:22 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:14:39.581 20:41:22 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:14:39.581 20:41:22 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:39.581 20:41:22 -- common/autotest_common.sh@889 -- # local i 00:14:39.581 20:41:22 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:39.581 20:41:22 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:39.581 20:41:22 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:39.840 20:41:23 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:39.840 [ 00:14:39.840 { 00:14:39.840 "name": "BaseBdev3", 00:14:39.840 "aliases": [ 00:14:39.840 "84906d22-bb4c-4fae-b5dc-56aee3c9a3be" 00:14:39.840 ], 00:14:39.840 "product_name": "Malloc disk", 00:14:39.840 "block_size": 512, 00:14:39.840 "num_blocks": 65536, 00:14:39.840 "uuid": "84906d22-bb4c-4fae-b5dc-56aee3c9a3be", 00:14:39.840 "assigned_rate_limits": { 00:14:39.840 "rw_ios_per_sec": 0, 00:14:39.840 "rw_mbytes_per_sec": 0, 00:14:39.840 "r_mbytes_per_sec": 0, 00:14:39.840 "w_mbytes_per_sec": 0 00:14:39.840 }, 00:14:39.840 "claimed": true, 00:14:39.840 "claim_type": "exclusive_write", 00:14:39.840 "zoned": false, 00:14:39.840 "supported_io_types": { 00:14:39.840 "read": true, 00:14:39.840 "write": true, 00:14:39.840 "unmap": true, 00:14:39.840 "write_zeroes": true, 00:14:39.840 "flush": true, 00:14:39.840 "reset": true, 00:14:39.840 "compare": false, 00:14:39.840 "compare_and_write": false, 00:14:39.840 "abort": true, 00:14:39.840 "nvme_admin": false, 00:14:39.840 "nvme_io": false 00:14:39.840 }, 00:14:39.840 "memory_domains": [ 00:14:39.840 { 00:14:39.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.840 "dma_device_type": 2 00:14:39.840 } 00:14:39.840 ], 00:14:39.840 "driver_specific": {} 00:14:39.840 } 00:14:39.840 ] 00:14:39.840 20:41:23 -- common/autotest_common.sh@895 -- # return 0 00:14:39.840 20:41:23 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:39.840 20:41:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:39.840 20:41:23 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:39.840 20:41:23 -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:14:39.840 20:41:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:39.840 20:41:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:39.840 20:41:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:39.840 20:41:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:39.840 20:41:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:39.840 20:41:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:39.840 20:41:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:39.840 20:41:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:39.841 20:41:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:39.841 20:41:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.100 20:41:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:40.100 "name": "Existed_Raid", 00:14:40.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.100 "strip_size_kb": 64, 00:14:40.100 "state": "configuring", 00:14:40.100 "raid_level": "raid0", 00:14:40.100 "superblock": false, 00:14:40.100 "num_base_bdevs": 4, 00:14:40.100 "num_base_bdevs_discovered": 3, 00:14:40.100 "num_base_bdevs_operational": 4, 00:14:40.100 "base_bdevs_list": [ 00:14:40.100 { 00:14:40.100 "name": "BaseBdev1", 00:14:40.100 "uuid": "8c31abe6-1866-4c30-887d-30babd282572", 00:14:40.100 "is_configured": true, 00:14:40.100 "data_offset": 0, 00:14:40.100 "data_size": 65536 00:14:40.100 }, 00:14:40.100 { 00:14:40.100 "name": "BaseBdev2", 00:14:40.100 "uuid": "5025e128-f1bb-4ee6-8123-c869db724216", 00:14:40.100 "is_configured": true, 00:14:40.100 "data_offset": 0, 00:14:40.100 "data_size": 65536 00:14:40.100 }, 00:14:40.100 { 00:14:40.100 "name": "BaseBdev3", 00:14:40.100 "uuid": "84906d22-bb4c-4fae-b5dc-56aee3c9a3be", 00:14:40.100 "is_configured": true, 00:14:40.100 "data_offset": 0, 00:14:40.100 "data_size": 65536 00:14:40.100 }, 00:14:40.100 { 00:14:40.100 "name": "BaseBdev4", 00:14:40.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.100 "is_configured": false, 00:14:40.100 "data_offset": 0, 00:14:40.100 "data_size": 0 00:14:40.100 } 00:14:40.100 ] 00:14:40.100 }' 00:14:40.100 20:41:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:40.100 20:41:23 -- common/autotest_common.sh@10 -- # set +x 00:14:40.667 20:41:24 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:14:40.926 [2024-04-15 20:41:24.223831] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:40.926 [2024-04-15 20:41:24.223875] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000028b80 00:14:40.926 [2024-04-15 20:41:24.223884] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:40.926 [2024-04-15 20:41:24.223991] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:14:40.926 [2024-04-15 20:41:24.224196] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000028b80 00:14:40.926 [2024-04-15 20:41:24.224206] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000028b80 00:14:40.926 [2024-04-15 20:41:24.224369] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.926 BaseBdev4 00:14:40.926 20:41:24 -- bdev/bdev_raid.sh@257 -- # 
waitforbdev BaseBdev4 00:14:40.926 20:41:24 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:14:40.927 20:41:24 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:40.927 20:41:24 -- common/autotest_common.sh@889 -- # local i 00:14:40.927 20:41:24 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:40.927 20:41:24 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:40.927 20:41:24 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:40.927 20:41:24 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:41.186 [ 00:14:41.186 { 00:14:41.186 "name": "BaseBdev4", 00:14:41.186 "aliases": [ 00:14:41.186 "0e17614f-3cf7-4e51-968f-0f9034e23993" 00:14:41.186 ], 00:14:41.186 "product_name": "Malloc disk", 00:14:41.186 "block_size": 512, 00:14:41.186 "num_blocks": 65536, 00:14:41.186 "uuid": "0e17614f-3cf7-4e51-968f-0f9034e23993", 00:14:41.186 "assigned_rate_limits": { 00:14:41.186 "rw_ios_per_sec": 0, 00:14:41.186 "rw_mbytes_per_sec": 0, 00:14:41.186 "r_mbytes_per_sec": 0, 00:14:41.186 "w_mbytes_per_sec": 0 00:14:41.186 }, 00:14:41.186 "claimed": true, 00:14:41.186 "claim_type": "exclusive_write", 00:14:41.186 "zoned": false, 00:14:41.186 "supported_io_types": { 00:14:41.186 "read": true, 00:14:41.186 "write": true, 00:14:41.186 "unmap": true, 00:14:41.186 "write_zeroes": true, 00:14:41.186 "flush": true, 00:14:41.186 "reset": true, 00:14:41.186 "compare": false, 00:14:41.186 "compare_and_write": false, 00:14:41.186 "abort": true, 00:14:41.186 "nvme_admin": false, 00:14:41.186 "nvme_io": false 00:14:41.186 }, 00:14:41.186 "memory_domains": [ 00:14:41.186 { 00:14:41.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.186 "dma_device_type": 2 00:14:41.186 } 00:14:41.186 ], 00:14:41.186 "driver_specific": {} 00:14:41.186 } 00:14:41.186 ] 00:14:41.186 20:41:24 -- common/autotest_common.sh@895 -- # return 0 00:14:41.186 20:41:24 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:41.186 20:41:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:41.186 20:41:24 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:14:41.186 20:41:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:41.186 20:41:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:41.186 20:41:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:41.186 20:41:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:41.186 20:41:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:41.186 20:41:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:41.186 20:41:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:41.186 20:41:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:41.186 20:41:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:41.186 20:41:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:41.186 20:41:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.445 20:41:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:41.445 "name": "Existed_Raid", 00:14:41.445 "uuid": "3182068d-24e0-49d2-8393-97bafe9132f5", 00:14:41.445 "strip_size_kb": 64, 00:14:41.445 "state": "online", 00:14:41.445 "raid_level": "raid0", 00:14:41.445 "superblock": false, 00:14:41.445 
"num_base_bdevs": 4, 00:14:41.445 "num_base_bdevs_discovered": 4, 00:14:41.445 "num_base_bdevs_operational": 4, 00:14:41.445 "base_bdevs_list": [ 00:14:41.445 { 00:14:41.445 "name": "BaseBdev1", 00:14:41.445 "uuid": "8c31abe6-1866-4c30-887d-30babd282572", 00:14:41.445 "is_configured": true, 00:14:41.445 "data_offset": 0, 00:14:41.445 "data_size": 65536 00:14:41.445 }, 00:14:41.445 { 00:14:41.445 "name": "BaseBdev2", 00:14:41.445 "uuid": "5025e128-f1bb-4ee6-8123-c869db724216", 00:14:41.445 "is_configured": true, 00:14:41.445 "data_offset": 0, 00:14:41.445 "data_size": 65536 00:14:41.445 }, 00:14:41.445 { 00:14:41.445 "name": "BaseBdev3", 00:14:41.445 "uuid": "84906d22-bb4c-4fae-b5dc-56aee3c9a3be", 00:14:41.445 "is_configured": true, 00:14:41.445 "data_offset": 0, 00:14:41.445 "data_size": 65536 00:14:41.445 }, 00:14:41.445 { 00:14:41.445 "name": "BaseBdev4", 00:14:41.445 "uuid": "0e17614f-3cf7-4e51-968f-0f9034e23993", 00:14:41.445 "is_configured": true, 00:14:41.445 "data_offset": 0, 00:14:41.445 "data_size": 65536 00:14:41.445 } 00:14:41.445 ] 00:14:41.445 }' 00:14:41.445 20:41:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:41.445 20:41:24 -- common/autotest_common.sh@10 -- # set +x 00:14:42.013 20:41:25 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:42.271 [2024-04-15 20:41:25.601935] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:42.271 [2024-04-15 20:41:25.601972] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:42.271 [2024-04-15 20:41:25.602017] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:42.271 20:41:25 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:42.271 20:41:25 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:14:42.271 20:41:25 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:42.271 20:41:25 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:42.271 20:41:25 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:42.271 20:41:25 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:14:42.271 20:41:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:42.271 20:41:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:42.271 20:41:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:42.271 20:41:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:42.271 20:41:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:42.271 20:41:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:42.271 20:41:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:42.271 20:41:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:42.271 20:41:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:42.271 20:41:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:42.271 20:41:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.530 20:41:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:42.530 "name": "Existed_Raid", 00:14:42.530 "uuid": "3182068d-24e0-49d2-8393-97bafe9132f5", 00:14:42.530 "strip_size_kb": 64, 00:14:42.530 "state": "offline", 00:14:42.530 "raid_level": "raid0", 00:14:42.530 "superblock": false, 00:14:42.530 "num_base_bdevs": 4, 00:14:42.530 "num_base_bdevs_discovered": 3, 00:14:42.530 "num_base_bdevs_operational": 3, 00:14:42.530 
"base_bdevs_list": [ 00:14:42.530 { 00:14:42.530 "name": null, 00:14:42.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.530 "is_configured": false, 00:14:42.530 "data_offset": 0, 00:14:42.530 "data_size": 65536 00:14:42.530 }, 00:14:42.530 { 00:14:42.530 "name": "BaseBdev2", 00:14:42.530 "uuid": "5025e128-f1bb-4ee6-8123-c869db724216", 00:14:42.530 "is_configured": true, 00:14:42.530 "data_offset": 0, 00:14:42.530 "data_size": 65536 00:14:42.530 }, 00:14:42.530 { 00:14:42.530 "name": "BaseBdev3", 00:14:42.530 "uuid": "84906d22-bb4c-4fae-b5dc-56aee3c9a3be", 00:14:42.530 "is_configured": true, 00:14:42.530 "data_offset": 0, 00:14:42.530 "data_size": 65536 00:14:42.530 }, 00:14:42.530 { 00:14:42.530 "name": "BaseBdev4", 00:14:42.530 "uuid": "0e17614f-3cf7-4e51-968f-0f9034e23993", 00:14:42.530 "is_configured": true, 00:14:42.530 "data_offset": 0, 00:14:42.530 "data_size": 65536 00:14:42.530 } 00:14:42.530 ] 00:14:42.530 }' 00:14:42.530 20:41:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:42.530 20:41:25 -- common/autotest_common.sh@10 -- # set +x 00:14:43.097 20:41:26 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:43.097 20:41:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:43.097 20:41:26 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:43.097 20:41:26 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:43.356 20:41:26 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:43.356 20:41:26 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:43.356 20:41:26 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:43.616 [2024-04-15 20:41:26.891813] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:43.616 20:41:27 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:43.616 20:41:27 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:43.616 20:41:27 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:43.616 20:41:27 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:43.875 20:41:27 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:43.875 20:41:27 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:43.875 20:41:27 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:43.875 [2024-04-15 20:41:27.367520] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:44.134 20:41:27 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:44.134 20:41:27 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:44.134 20:41:27 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:44.134 20:41:27 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:44.393 20:41:27 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:44.393 20:41:27 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:44.393 20:41:27 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:14:44.393 [2024-04-15 20:41:27.822862] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:44.393 [2024-04-15 20:41:27.822961] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000028b80 
name Existed_Raid, state offline 00:14:44.652 20:41:27 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:44.652 20:41:27 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:44.652 20:41:27 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:44.652 20:41:27 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:44.912 20:41:28 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:44.912 20:41:28 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:44.912 20:41:28 -- bdev/bdev_raid.sh@287 -- # killprocess 52466 00:14:44.912 20:41:28 -- common/autotest_common.sh@926 -- # '[' -z 52466 ']' 00:14:44.912 20:41:28 -- common/autotest_common.sh@930 -- # kill -0 52466 00:14:44.912 20:41:28 -- common/autotest_common.sh@931 -- # uname 00:14:44.912 20:41:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:44.912 20:41:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 52466 00:14:44.912 killing process with pid 52466 00:14:44.912 20:41:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:44.912 20:41:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:44.912 20:41:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 52466' 00:14:44.912 20:41:28 -- common/autotest_common.sh@945 -- # kill 52466 00:14:44.912 20:41:28 -- common/autotest_common.sh@950 -- # wait 52466 00:14:44.912 [2024-04-15 20:41:28.212204] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:44.912 [2024-04-15 20:41:28.212341] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:46.289 ************************************ 00:14:46.289 END TEST raid_state_function_test 00:14:46.289 ************************************ 00:14:46.289 20:41:29 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:46.289 00:14:46.289 real 0m13.381s 00:14:46.289 user 0m22.863s 00:14:46.289 sys 0m1.679s 00:14:46.289 20:41:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:46.289 20:41:29 -- common/autotest_common.sh@10 -- # set +x 00:14:46.289 20:41:29 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:14:46.289 20:41:29 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:46.289 20:41:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:46.289 20:41:29 -- common/autotest_common.sh@10 -- # set +x 00:14:46.289 ************************************ 00:14:46.289 START TEST raid_state_function_test_sb 00:14:46.289 ************************************ 00:14:46.289 20:41:29 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 true 00:14:46.289 20:41:29 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:14:46.289 20:41:29 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:14:46.289 20:41:29 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:14:46.289 20:41:29 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:46.289 20:41:29 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:14:46.289 20:41:29 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:46.289 20:41:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:46.289 20:41:29 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:46.289 20:41:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:46.289 20:41:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:46.289 20:41:29 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:46.289 
20:41:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:46.289 20:41:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:46.289 20:41:29 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:14:46.289 20:41:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:46.289 20:41:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:46.289 20:41:29 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:14:46.289 20:41:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:46.289 20:41:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:46.289 Process raid pid: 52899 00:14:46.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:46.289 20:41:29 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:46.289 20:41:29 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:46.289 20:41:29 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:46.289 20:41:29 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:46.289 20:41:29 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:46.289 20:41:29 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:14:46.289 20:41:29 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:46.289 20:41:29 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:46.290 20:41:29 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:14:46.290 20:41:29 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:14:46.290 20:41:29 -- bdev/bdev_raid.sh@226 -- # raid_pid=52899 00:14:46.290 20:41:29 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 52899' 00:14:46.290 20:41:29 -- bdev/bdev_raid.sh@228 -- # waitforlisten 52899 /var/tmp/spdk-raid.sock 00:14:46.290 20:41:29 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:46.290 20:41:29 -- common/autotest_common.sh@819 -- # '[' -z 52899 ']' 00:14:46.290 20:41:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:46.290 20:41:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:46.290 20:41:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:46.290 20:41:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:46.290 20:41:29 -- common/autotest_common.sh@10 -- # set +x 00:14:46.290 [2024-04-15 20:41:29.761697] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:14:46.290 [2024-04-15 20:41:29.761861] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.548 [2024-04-15 20:41:29.918867] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.807 [2024-04-15 20:41:30.116724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.066 [2024-04-15 20:41:30.317017] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:47.066 20:41:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:47.066 20:41:30 -- common/autotest_common.sh@852 -- # return 0 00:14:47.066 20:41:30 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:47.325 [2024-04-15 20:41:30.699293] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:47.325 [2024-04-15 20:41:30.699359] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:47.325 [2024-04-15 20:41:30.699371] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:47.325 [2024-04-15 20:41:30.699387] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:47.325 [2024-04-15 20:41:30.699395] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:47.325 [2024-04-15 20:41:30.699434] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:47.325 [2024-04-15 20:41:30.699443] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:47.325 [2024-04-15 20:41:30.699463] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:47.325 20:41:30 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:47.325 20:41:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:47.325 20:41:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:47.325 20:41:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:47.325 20:41:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:47.325 20:41:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:47.325 20:41:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:47.325 20:41:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:47.325 20:41:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:47.325 20:41:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:47.325 20:41:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:47.325 20:41:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.585 20:41:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:47.585 "name": "Existed_Raid", 00:14:47.585 "uuid": "0ec888d4-ff3f-4922-b8b4-07be758aaf7c", 00:14:47.585 "strip_size_kb": 64, 00:14:47.585 "state": "configuring", 00:14:47.585 "raid_level": "raid0", 00:14:47.585 "superblock": true, 00:14:47.585 "num_base_bdevs": 4, 00:14:47.585 "num_base_bdevs_discovered": 0, 00:14:47.585 "num_base_bdevs_operational": 4, 00:14:47.585 "base_bdevs_list": [ 00:14:47.585 { 00:14:47.585 
"name": "BaseBdev1", 00:14:47.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.585 "is_configured": false, 00:14:47.585 "data_offset": 0, 00:14:47.585 "data_size": 0 00:14:47.585 }, 00:14:47.585 { 00:14:47.585 "name": "BaseBdev2", 00:14:47.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.585 "is_configured": false, 00:14:47.585 "data_offset": 0, 00:14:47.585 "data_size": 0 00:14:47.585 }, 00:14:47.585 { 00:14:47.585 "name": "BaseBdev3", 00:14:47.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.585 "is_configured": false, 00:14:47.585 "data_offset": 0, 00:14:47.585 "data_size": 0 00:14:47.585 }, 00:14:47.585 { 00:14:47.585 "name": "BaseBdev4", 00:14:47.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.585 "is_configured": false, 00:14:47.585 "data_offset": 0, 00:14:47.585 "data_size": 0 00:14:47.585 } 00:14:47.585 ] 00:14:47.585 }' 00:14:47.585 20:41:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:47.585 20:41:30 -- common/autotest_common.sh@10 -- # set +x 00:14:48.155 20:41:31 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:48.155 [2024-04-15 20:41:31.629787] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:48.155 [2024-04-15 20:41:31.629828] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:14:48.155 20:41:31 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:48.415 [2024-04-15 20:41:31.805675] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:48.415 [2024-04-15 20:41:31.805738] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:48.415 [2024-04-15 20:41:31.805749] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:48.415 [2024-04-15 20:41:31.805780] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:48.415 [2024-04-15 20:41:31.805789] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:48.415 [2024-04-15 20:41:31.805810] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:48.415 [2024-04-15 20:41:31.805817] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:48.415 [2024-04-15 20:41:31.805838] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:48.415 20:41:31 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:48.675 [2024-04-15 20:41:32.021543] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:48.675 BaseBdev1 00:14:48.675 20:41:32 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:48.675 20:41:32 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:48.675 20:41:32 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:48.675 20:41:32 -- common/autotest_common.sh@889 -- # local i 00:14:48.675 20:41:32 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:48.675 20:41:32 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:48.675 20:41:32 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:48.933 20:41:32 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:48.933 [ 00:14:48.933 { 00:14:48.933 "name": "BaseBdev1", 00:14:48.933 "aliases": [ 00:14:48.933 "27254db1-e698-41c5-9137-e2d80281330a" 00:14:48.933 ], 00:14:48.933 "product_name": "Malloc disk", 00:14:48.933 "block_size": 512, 00:14:48.933 "num_blocks": 65536, 00:14:48.933 "uuid": "27254db1-e698-41c5-9137-e2d80281330a", 00:14:48.933 "assigned_rate_limits": { 00:14:48.933 "rw_ios_per_sec": 0, 00:14:48.933 "rw_mbytes_per_sec": 0, 00:14:48.933 "r_mbytes_per_sec": 0, 00:14:48.933 "w_mbytes_per_sec": 0 00:14:48.933 }, 00:14:48.933 "claimed": true, 00:14:48.933 "claim_type": "exclusive_write", 00:14:48.933 "zoned": false, 00:14:48.933 "supported_io_types": { 00:14:48.933 "read": true, 00:14:48.933 "write": true, 00:14:48.933 "unmap": true, 00:14:48.933 "write_zeroes": true, 00:14:48.933 "flush": true, 00:14:48.933 "reset": true, 00:14:48.933 "compare": false, 00:14:48.933 "compare_and_write": false, 00:14:48.933 "abort": true, 00:14:48.933 "nvme_admin": false, 00:14:48.933 "nvme_io": false 00:14:48.933 }, 00:14:48.933 "memory_domains": [ 00:14:48.933 { 00:14:48.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.933 "dma_device_type": 2 00:14:48.933 } 00:14:48.933 ], 00:14:48.933 "driver_specific": {} 00:14:48.933 } 00:14:48.933 ] 00:14:48.933 20:41:32 -- common/autotest_common.sh@895 -- # return 0 00:14:48.933 20:41:32 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:48.933 20:41:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:48.933 20:41:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:48.933 20:41:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:48.933 20:41:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:48.933 20:41:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:48.933 20:41:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:48.933 20:41:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:48.933 20:41:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:48.933 20:41:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:48.933 20:41:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:48.933 20:41:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.192 20:41:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:49.193 "name": "Existed_Raid", 00:14:49.193 "uuid": "83a4ebc8-e53e-4896-a90d-011a44e310ea", 00:14:49.193 "strip_size_kb": 64, 00:14:49.193 "state": "configuring", 00:14:49.193 "raid_level": "raid0", 00:14:49.193 "superblock": true, 00:14:49.193 "num_base_bdevs": 4, 00:14:49.193 "num_base_bdevs_discovered": 1, 00:14:49.193 "num_base_bdevs_operational": 4, 00:14:49.193 "base_bdevs_list": [ 00:14:49.193 { 00:14:49.193 "name": "BaseBdev1", 00:14:49.193 "uuid": "27254db1-e698-41c5-9137-e2d80281330a", 00:14:49.193 "is_configured": true, 00:14:49.193 "data_offset": 2048, 00:14:49.193 "data_size": 63488 00:14:49.193 }, 00:14:49.193 { 00:14:49.193 "name": "BaseBdev2", 00:14:49.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.193 "is_configured": false, 00:14:49.193 "data_offset": 0, 00:14:49.193 "data_size": 0 00:14:49.193 }, 
00:14:49.193 { 00:14:49.193 "name": "BaseBdev3", 00:14:49.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.193 "is_configured": false, 00:14:49.193 "data_offset": 0, 00:14:49.193 "data_size": 0 00:14:49.193 }, 00:14:49.193 { 00:14:49.193 "name": "BaseBdev4", 00:14:49.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.193 "is_configured": false, 00:14:49.193 "data_offset": 0, 00:14:49.193 "data_size": 0 00:14:49.193 } 00:14:49.193 ] 00:14:49.193 }' 00:14:49.193 20:41:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:49.193 20:41:32 -- common/autotest_common.sh@10 -- # set +x 00:14:49.759 20:41:33 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:49.759 [2024-04-15 20:41:33.236496] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:49.759 [2024-04-15 20:41:33.236546] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027680 name Existed_Raid, state configuring 00:14:49.759 20:41:33 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:14:49.759 20:41:33 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:50.017 20:41:33 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:50.276 BaseBdev1 00:14:50.276 20:41:33 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:14:50.276 20:41:33 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:50.276 20:41:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:50.276 20:41:33 -- common/autotest_common.sh@889 -- # local i 00:14:50.276 20:41:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:50.276 20:41:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:50.276 20:41:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:50.535 20:41:33 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:50.535 [ 00:14:50.535 { 00:14:50.535 "name": "BaseBdev1", 00:14:50.535 "aliases": [ 00:14:50.535 "cbb0d6a2-b2fb-463c-91fe-9b7f50f65c55" 00:14:50.535 ], 00:14:50.535 "product_name": "Malloc disk", 00:14:50.535 "block_size": 512, 00:14:50.535 "num_blocks": 65536, 00:14:50.535 "uuid": "cbb0d6a2-b2fb-463c-91fe-9b7f50f65c55", 00:14:50.535 "assigned_rate_limits": { 00:14:50.535 "rw_ios_per_sec": 0, 00:14:50.535 "rw_mbytes_per_sec": 0, 00:14:50.535 "r_mbytes_per_sec": 0, 00:14:50.535 "w_mbytes_per_sec": 0 00:14:50.535 }, 00:14:50.535 "claimed": false, 00:14:50.535 "zoned": false, 00:14:50.535 "supported_io_types": { 00:14:50.535 "read": true, 00:14:50.535 "write": true, 00:14:50.535 "unmap": true, 00:14:50.535 "write_zeroes": true, 00:14:50.535 "flush": true, 00:14:50.535 "reset": true, 00:14:50.535 "compare": false, 00:14:50.535 "compare_and_write": false, 00:14:50.535 "abort": true, 00:14:50.535 "nvme_admin": false, 00:14:50.535 "nvme_io": false 00:14:50.535 }, 00:14:50.535 "memory_domains": [ 00:14:50.535 { 00:14:50.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.535 "dma_device_type": 2 00:14:50.535 } 00:14:50.535 ], 00:14:50.535 "driver_specific": {} 00:14:50.535 } 00:14:50.535 ] 00:14:50.535 20:41:34 -- common/autotest_common.sh@895 -- # return 0 00:14:50.535 20:41:34 -- bdev/bdev_raid.sh@253 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:50.794 [2024-04-15 20:41:34.154866] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:50.794 [2024-04-15 20:41:34.156293] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:50.794 [2024-04-15 20:41:34.156363] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:50.794 [2024-04-15 20:41:34.156374] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:50.794 [2024-04-15 20:41:34.156397] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:50.794 [2024-04-15 20:41:34.156405] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:50.794 [2024-04-15 20:41:34.156421] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:50.794 20:41:34 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:50.794 20:41:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:50.794 20:41:34 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:50.794 20:41:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:50.794 20:41:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:50.794 20:41:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:50.794 20:41:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:50.794 20:41:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:50.794 20:41:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:50.794 20:41:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:50.794 20:41:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:50.794 20:41:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:50.794 20:41:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.794 20:41:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.054 20:41:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:51.054 "name": "Existed_Raid", 00:14:51.054 "uuid": "60fb784d-d48e-4a5c-9f80-9e47977548df", 00:14:51.054 "strip_size_kb": 64, 00:14:51.054 "state": "configuring", 00:14:51.054 "raid_level": "raid0", 00:14:51.054 "superblock": true, 00:14:51.054 "num_base_bdevs": 4, 00:14:51.054 "num_base_bdevs_discovered": 1, 00:14:51.054 "num_base_bdevs_operational": 4, 00:14:51.054 "base_bdevs_list": [ 00:14:51.054 { 00:14:51.054 "name": "BaseBdev1", 00:14:51.054 "uuid": "cbb0d6a2-b2fb-463c-91fe-9b7f50f65c55", 00:14:51.054 "is_configured": true, 00:14:51.054 "data_offset": 2048, 00:14:51.054 "data_size": 63488 00:14:51.054 }, 00:14:51.054 { 00:14:51.054 "name": "BaseBdev2", 00:14:51.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.054 "is_configured": false, 00:14:51.054 "data_offset": 0, 00:14:51.054 "data_size": 0 00:14:51.054 }, 00:14:51.054 { 00:14:51.054 "name": "BaseBdev3", 00:14:51.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.054 "is_configured": false, 00:14:51.054 "data_offset": 0, 00:14:51.054 "data_size": 0 00:14:51.054 }, 00:14:51.054 { 00:14:51.054 "name": "BaseBdev4", 00:14:51.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.054 "is_configured": 
false, 00:14:51.054 "data_offset": 0, 00:14:51.054 "data_size": 0 00:14:51.054 } 00:14:51.054 ] 00:14:51.054 }' 00:14:51.054 20:41:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:51.054 20:41:34 -- common/autotest_common.sh@10 -- # set +x 00:14:51.622 20:41:34 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:51.622 BaseBdev2 00:14:51.622 [2024-04-15 20:41:35.001807] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:51.622 20:41:35 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:51.622 20:41:35 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:14:51.622 20:41:35 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:51.622 20:41:35 -- common/autotest_common.sh@889 -- # local i 00:14:51.622 20:41:35 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:51.622 20:41:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:51.622 20:41:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:51.880 20:41:35 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:52.139 [ 00:14:52.139 { 00:14:52.139 "name": "BaseBdev2", 00:14:52.139 "aliases": [ 00:14:52.139 "8205cf42-1f4b-4a80-a15c-4c6f1d99a89b" 00:14:52.139 ], 00:14:52.139 "product_name": "Malloc disk", 00:14:52.139 "block_size": 512, 00:14:52.139 "num_blocks": 65536, 00:14:52.139 "uuid": "8205cf42-1f4b-4a80-a15c-4c6f1d99a89b", 00:14:52.139 "assigned_rate_limits": { 00:14:52.139 "rw_ios_per_sec": 0, 00:14:52.139 "rw_mbytes_per_sec": 0, 00:14:52.139 "r_mbytes_per_sec": 0, 00:14:52.139 "w_mbytes_per_sec": 0 00:14:52.140 }, 00:14:52.140 "claimed": true, 00:14:52.140 "claim_type": "exclusive_write", 00:14:52.140 "zoned": false, 00:14:52.140 "supported_io_types": { 00:14:52.140 "read": true, 00:14:52.140 "write": true, 00:14:52.140 "unmap": true, 00:14:52.140 "write_zeroes": true, 00:14:52.140 "flush": true, 00:14:52.140 "reset": true, 00:14:52.140 "compare": false, 00:14:52.140 "compare_and_write": false, 00:14:52.140 "abort": true, 00:14:52.140 "nvme_admin": false, 00:14:52.140 "nvme_io": false 00:14:52.140 }, 00:14:52.140 "memory_domains": [ 00:14:52.140 { 00:14:52.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.140 "dma_device_type": 2 00:14:52.140 } 00:14:52.140 ], 00:14:52.140 "driver_specific": {} 00:14:52.140 } 00:14:52.140 ] 00:14:52.140 20:41:35 -- common/autotest_common.sh@895 -- # return 0 00:14:52.140 20:41:35 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:52.140 20:41:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:52.140 20:41:35 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:52.140 20:41:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:52.140 20:41:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:52.140 20:41:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:52.140 20:41:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:52.140 20:41:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:52.140 20:41:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:52.140 20:41:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:52.140 20:41:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:52.140 
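The trace is repeating one pattern per base device: create a malloc bdev under the name the raid expects, let the examine cycle claim it, then re-check that Existed_Raid is still configuring with one more base bdev discovered. A minimal standalone sketch of that loop, assuming an SPDK target is already listening on /var/tmp/spdk-raid.sock and a raid that expects members BaseBdev1..BaseBdev4 (the rpc.py path is taken from the trace; the echo format is illustrative):

# Add the remaining members one at a time and watch the discovery count.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for i in 2 3 4; do
    $rpc bdev_malloc_create 32 512 -b "BaseBdev$i"   # 32 MiB, 512 B blocks
    $rpc bdev_wait_for_examine                       # let the raid claim it
    $rpc bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "Existed_Raid")
               | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'
done

Each iteration should print "configuring 2/4", then "configuring 3/4", until the final member flips the state, which is exactly what the repeated verify_raid_bdev_state expansions below are asserting.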
20:41:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:52.140 20:41:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:52.140 20:41:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.140 20:41:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:52.140 "name": "Existed_Raid", 00:14:52.140 "uuid": "60fb784d-d48e-4a5c-9f80-9e47977548df", 00:14:52.140 "strip_size_kb": 64, 00:14:52.140 "state": "configuring", 00:14:52.140 "raid_level": "raid0", 00:14:52.140 "superblock": true, 00:14:52.140 "num_base_bdevs": 4, 00:14:52.140 "num_base_bdevs_discovered": 2, 00:14:52.140 "num_base_bdevs_operational": 4, 00:14:52.140 "base_bdevs_list": [ 00:14:52.140 { 00:14:52.140 "name": "BaseBdev1", 00:14:52.140 "uuid": "cbb0d6a2-b2fb-463c-91fe-9b7f50f65c55", 00:14:52.140 "is_configured": true, 00:14:52.140 "data_offset": 2048, 00:14:52.140 "data_size": 63488 00:14:52.140 }, 00:14:52.140 { 00:14:52.140 "name": "BaseBdev2", 00:14:52.140 "uuid": "8205cf42-1f4b-4a80-a15c-4c6f1d99a89b", 00:14:52.140 "is_configured": true, 00:14:52.140 "data_offset": 2048, 00:14:52.140 "data_size": 63488 00:14:52.140 }, 00:14:52.140 { 00:14:52.140 "name": "BaseBdev3", 00:14:52.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.140 "is_configured": false, 00:14:52.140 "data_offset": 0, 00:14:52.140 "data_size": 0 00:14:52.140 }, 00:14:52.140 { 00:14:52.140 "name": "BaseBdev4", 00:14:52.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.140 "is_configured": false, 00:14:52.140 "data_offset": 0, 00:14:52.140 "data_size": 0 00:14:52.140 } 00:14:52.140 ] 00:14:52.140 }' 00:14:52.140 20:41:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:52.140 20:41:35 -- common/autotest_common.sh@10 -- # set +x 00:14:52.707 20:41:36 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:52.966 BaseBdev3 00:14:52.966 [2024-04-15 20:41:36.352951] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:52.966 20:41:36 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:14:52.966 20:41:36 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:14:52.966 20:41:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:52.966 20:41:36 -- common/autotest_common.sh@889 -- # local i 00:14:52.966 20:41:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:52.966 20:41:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:52.966 20:41:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:53.225 20:41:36 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:53.225 [ 00:14:53.225 { 00:14:53.225 "name": "BaseBdev3", 00:14:53.225 "aliases": [ 00:14:53.225 "ee6e7356-d295-4f1d-a8cf-c000bf5afc2e" 00:14:53.225 ], 00:14:53.225 "product_name": "Malloc disk", 00:14:53.225 "block_size": 512, 00:14:53.225 "num_blocks": 65536, 00:14:53.225 "uuid": "ee6e7356-d295-4f1d-a8cf-c000bf5afc2e", 00:14:53.225 "assigned_rate_limits": { 00:14:53.225 "rw_ios_per_sec": 0, 00:14:53.225 "rw_mbytes_per_sec": 0, 00:14:53.225 "r_mbytes_per_sec": 0, 00:14:53.225 "w_mbytes_per_sec": 0 00:14:53.225 }, 00:14:53.225 "claimed": true, 00:14:53.225 "claim_type": "exclusive_write", 00:14:53.225 "zoned": false, 
00:14:53.225 "supported_io_types": { 00:14:53.225 "read": true, 00:14:53.225 "write": true, 00:14:53.225 "unmap": true, 00:14:53.225 "write_zeroes": true, 00:14:53.225 "flush": true, 00:14:53.225 "reset": true, 00:14:53.225 "compare": false, 00:14:53.225 "compare_and_write": false, 00:14:53.225 "abort": true, 00:14:53.225 "nvme_admin": false, 00:14:53.225 "nvme_io": false 00:14:53.225 }, 00:14:53.225 "memory_domains": [ 00:14:53.225 { 00:14:53.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.225 "dma_device_type": 2 00:14:53.225 } 00:14:53.225 ], 00:14:53.225 "driver_specific": {} 00:14:53.225 } 00:14:53.225 ] 00:14:53.225 20:41:36 -- common/autotest_common.sh@895 -- # return 0 00:14:53.225 20:41:36 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:53.225 20:41:36 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:53.225 20:41:36 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:53.225 20:41:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:53.225 20:41:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:53.225 20:41:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:53.225 20:41:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:53.225 20:41:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:53.225 20:41:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:53.225 20:41:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:53.225 20:41:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:53.225 20:41:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:53.225 20:41:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.225 20:41:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.484 20:41:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:53.484 "name": "Existed_Raid", 00:14:53.484 "uuid": "60fb784d-d48e-4a5c-9f80-9e47977548df", 00:14:53.484 "strip_size_kb": 64, 00:14:53.484 "state": "configuring", 00:14:53.484 "raid_level": "raid0", 00:14:53.484 "superblock": true, 00:14:53.484 "num_base_bdevs": 4, 00:14:53.484 "num_base_bdevs_discovered": 3, 00:14:53.484 "num_base_bdevs_operational": 4, 00:14:53.484 "base_bdevs_list": [ 00:14:53.484 { 00:14:53.484 "name": "BaseBdev1", 00:14:53.484 "uuid": "cbb0d6a2-b2fb-463c-91fe-9b7f50f65c55", 00:14:53.484 "is_configured": true, 00:14:53.484 "data_offset": 2048, 00:14:53.484 "data_size": 63488 00:14:53.484 }, 00:14:53.484 { 00:14:53.484 "name": "BaseBdev2", 00:14:53.484 "uuid": "8205cf42-1f4b-4a80-a15c-4c6f1d99a89b", 00:14:53.484 "is_configured": true, 00:14:53.484 "data_offset": 2048, 00:14:53.484 "data_size": 63488 00:14:53.484 }, 00:14:53.484 { 00:14:53.484 "name": "BaseBdev3", 00:14:53.484 "uuid": "ee6e7356-d295-4f1d-a8cf-c000bf5afc2e", 00:14:53.484 "is_configured": true, 00:14:53.484 "data_offset": 2048, 00:14:53.484 "data_size": 63488 00:14:53.484 }, 00:14:53.484 { 00:14:53.484 "name": "BaseBdev4", 00:14:53.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.484 "is_configured": false, 00:14:53.484 "data_offset": 0, 00:14:53.484 "data_size": 0 00:14:53.484 } 00:14:53.484 ] 00:14:53.484 }' 00:14:53.484 20:41:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:53.484 20:41:36 -- common/autotest_common.sh@10 -- # set +x 00:14:54.052 20:41:37 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev4 00:14:54.311 [2024-04-15 20:41:37.625313] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:54.311 [2024-04-15 20:41:37.625434] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000029180 00:14:54.311 [2024-04-15 20:41:37.625445] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:54.311 [2024-04-15 20:41:37.625535] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:14:54.311 BaseBdev4 00:14:54.311 [2024-04-15 20:41:37.625902] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000029180 00:14:54.311 [2024-04-15 20:41:37.625921] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000029180 00:14:54.311 [2024-04-15 20:41:37.626025] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:54.311 20:41:37 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:14:54.311 20:41:37 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:14:54.311 20:41:37 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:54.311 20:41:37 -- common/autotest_common.sh@889 -- # local i 00:14:54.311 20:41:37 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:54.311 20:41:37 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:54.311 20:41:37 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:54.569 20:41:37 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:54.569 [ 00:14:54.569 { 00:14:54.569 "name": "BaseBdev4", 00:14:54.569 "aliases": [ 00:14:54.569 "9f1eaeea-5db8-4a9f-8e7e-70ccf086a35b" 00:14:54.569 ], 00:14:54.569 "product_name": "Malloc disk", 00:14:54.569 "block_size": 512, 00:14:54.569 "num_blocks": 65536, 00:14:54.569 "uuid": "9f1eaeea-5db8-4a9f-8e7e-70ccf086a35b", 00:14:54.569 "assigned_rate_limits": { 00:14:54.569 "rw_ios_per_sec": 0, 00:14:54.569 "rw_mbytes_per_sec": 0, 00:14:54.569 "r_mbytes_per_sec": 0, 00:14:54.569 "w_mbytes_per_sec": 0 00:14:54.569 }, 00:14:54.569 "claimed": true, 00:14:54.569 "claim_type": "exclusive_write", 00:14:54.569 "zoned": false, 00:14:54.569 "supported_io_types": { 00:14:54.569 "read": true, 00:14:54.569 "write": true, 00:14:54.569 "unmap": true, 00:14:54.569 "write_zeroes": true, 00:14:54.569 "flush": true, 00:14:54.569 "reset": true, 00:14:54.569 "compare": false, 00:14:54.569 "compare_and_write": false, 00:14:54.569 "abort": true, 00:14:54.569 "nvme_admin": false, 00:14:54.569 "nvme_io": false 00:14:54.569 }, 00:14:54.569 "memory_domains": [ 00:14:54.569 { 00:14:54.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.569 "dma_device_type": 2 00:14:54.569 } 00:14:54.569 ], 00:14:54.569 "driver_specific": {} 00:14:54.569 } 00:14:54.569 ] 00:14:54.569 20:41:37 -- common/autotest_common.sh@895 -- # return 0 00:14:54.569 20:41:37 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:54.569 20:41:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:54.570 20:41:37 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:14:54.570 20:41:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:54.570 20:41:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:54.570 20:41:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 
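With BaseBdev4 in place the raid flips from configuring to online (the io-device register and "raid bdev is created with name Existed_Raid" messages above). The verify_raid_bdev_state calls that the trace keeps expanding reduce to pulling one object out of bdev_raid_get_bdevs and comparing fields; a condensed sketch of that check, keeping the argument order from the trace (the real helper in bdev_raid.sh tracks a few more locals, so this is an approximation, not its exact body):

# verify_raid_bdev_state <name> <state> <level> <strip_kb> <num_operational>
verify_raid_bdev_state() {
    local info
    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
               bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$1\")")
    [ -n "$info" ] || return 1
    [ "$(jq -r '.state' <<<"$info")" = "$2" ] || return 1
    [ "$(jq -r '.raid_level' <<<"$info")" = "$3" ] || return 1
    [ "$(jq -r '.strip_size_kb' <<<"$info")" = "$4" ] || return 1
    [ "$(jq -r '.num_base_bdevs_operational' <<<"$info")" = "$5" ] || return 1
}
verify_raid_bdev_state Existed_Raid online raid0 64 4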
00:14:54.570 20:41:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:54.570 20:41:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:54.570 20:41:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:54.570 20:41:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:54.570 20:41:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:54.570 20:41:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:54.570 20:41:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.570 20:41:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:54.829 20:41:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:54.829 "name": "Existed_Raid", 00:14:54.829 "uuid": "60fb784d-d48e-4a5c-9f80-9e47977548df", 00:14:54.829 "strip_size_kb": 64, 00:14:54.829 "state": "online", 00:14:54.829 "raid_level": "raid0", 00:14:54.829 "superblock": true, 00:14:54.829 "num_base_bdevs": 4, 00:14:54.829 "num_base_bdevs_discovered": 4, 00:14:54.829 "num_base_bdevs_operational": 4, 00:14:54.829 "base_bdevs_list": [ 00:14:54.829 { 00:14:54.830 "name": "BaseBdev1", 00:14:54.830 "uuid": "cbb0d6a2-b2fb-463c-91fe-9b7f50f65c55", 00:14:54.830 "is_configured": true, 00:14:54.830 "data_offset": 2048, 00:14:54.830 "data_size": 63488 00:14:54.830 }, 00:14:54.830 { 00:14:54.830 "name": "BaseBdev2", 00:14:54.830 "uuid": "8205cf42-1f4b-4a80-a15c-4c6f1d99a89b", 00:14:54.830 "is_configured": true, 00:14:54.830 "data_offset": 2048, 00:14:54.830 "data_size": 63488 00:14:54.830 }, 00:14:54.830 { 00:14:54.830 "name": "BaseBdev3", 00:14:54.830 "uuid": "ee6e7356-d295-4f1d-a8cf-c000bf5afc2e", 00:14:54.830 "is_configured": true, 00:14:54.830 "data_offset": 2048, 00:14:54.830 "data_size": 63488 00:14:54.830 }, 00:14:54.830 { 00:14:54.830 "name": "BaseBdev4", 00:14:54.830 "uuid": "9f1eaeea-5db8-4a9f-8e7e-70ccf086a35b", 00:14:54.830 "is_configured": true, 00:14:54.830 "data_offset": 2048, 00:14:54.830 "data_size": 63488 00:14:54.830 } 00:14:54.830 ] 00:14:54.830 }' 00:14:54.830 20:41:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:54.830 20:41:38 -- common/autotest_common.sh@10 -- # set +x 00:14:55.397 20:41:38 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:55.655 [2024-04-15 20:41:38.907408] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:55.655 [2024-04-15 20:41:38.907433] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:55.655 [2024-04-15 20:41:38.907467] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:55.655 20:41:39 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:55.655 20:41:39 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:14:55.655 20:41:39 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:55.655 20:41:39 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:55.655 20:41:39 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:55.656 20:41:39 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:14:55.656 20:41:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:55.656 20:41:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:55.656 20:41:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:55.656 20:41:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:55.656 20:41:39 -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=3 00:14:55.656 20:41:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:55.656 20:41:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:55.656 20:41:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:55.656 20:41:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:55.656 20:41:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.656 20:41:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.914 20:41:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:55.915 "name": "Existed_Raid", 00:14:55.915 "uuid": "60fb784d-d48e-4a5c-9f80-9e47977548df", 00:14:55.915 "strip_size_kb": 64, 00:14:55.915 "state": "offline", 00:14:55.915 "raid_level": "raid0", 00:14:55.915 "superblock": true, 00:14:55.915 "num_base_bdevs": 4, 00:14:55.915 "num_base_bdevs_discovered": 3, 00:14:55.915 "num_base_bdevs_operational": 3, 00:14:55.915 "base_bdevs_list": [ 00:14:55.915 { 00:14:55.915 "name": null, 00:14:55.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.915 "is_configured": false, 00:14:55.915 "data_offset": 2048, 00:14:55.915 "data_size": 63488 00:14:55.915 }, 00:14:55.915 { 00:14:55.915 "name": "BaseBdev2", 00:14:55.915 "uuid": "8205cf42-1f4b-4a80-a15c-4c6f1d99a89b", 00:14:55.915 "is_configured": true, 00:14:55.915 "data_offset": 2048, 00:14:55.915 "data_size": 63488 00:14:55.915 }, 00:14:55.915 { 00:14:55.915 "name": "BaseBdev3", 00:14:55.915 "uuid": "ee6e7356-d295-4f1d-a8cf-c000bf5afc2e", 00:14:55.915 "is_configured": true, 00:14:55.915 "data_offset": 2048, 00:14:55.915 "data_size": 63488 00:14:55.915 }, 00:14:55.915 { 00:14:55.915 "name": "BaseBdev4", 00:14:55.915 "uuid": "9f1eaeea-5db8-4a9f-8e7e-70ccf086a35b", 00:14:55.915 "is_configured": true, 00:14:55.915 "data_offset": 2048, 00:14:55.915 "data_size": 63488 00:14:55.915 } 00:14:55.915 ] 00:14:55.915 }' 00:14:55.915 20:41:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:55.915 20:41:39 -- common/autotest_common.sh@10 -- # set +x 00:14:56.174 20:41:39 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:56.174 20:41:39 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:56.174 20:41:39 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:56.433 20:41:39 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:56.433 20:41:39 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:56.433 20:41:39 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:56.433 20:41:39 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:56.692 [2024-04-15 20:41:39.993757] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:56.692 20:41:40 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:56.692 20:41:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:56.692 20:41:40 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:56.692 20:41:40 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:56.950 20:41:40 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:56.950 20:41:40 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:56.950 20:41:40 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev3 00:14:57.265 [2024-04-15 20:41:40.492339] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:57.265 20:41:40 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:57.265 20:41:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:57.265 20:41:40 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.265 20:41:40 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:57.524 20:41:40 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:57.524 20:41:40 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:57.524 20:41:40 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:14:57.524 [2024-04-15 20:41:40.931465] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:57.524 [2024-04-15 20:41:40.931515] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000029180 name Existed_Raid, state offline 00:14:57.783 20:41:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:57.783 20:41:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:57.783 20:41:41 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:57.783 20:41:41 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.783 20:41:41 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:57.783 20:41:41 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:57.783 20:41:41 -- bdev/bdev_raid.sh@287 -- # killprocess 52899 00:14:57.783 20:41:41 -- common/autotest_common.sh@926 -- # '[' -z 52899 ']' 00:14:57.783 20:41:41 -- common/autotest_common.sh@930 -- # kill -0 52899 00:14:57.783 20:41:41 -- common/autotest_common.sh@931 -- # uname 00:14:57.783 20:41:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:57.783 20:41:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 52899 00:14:57.783 killing process with pid 52899 00:14:57.783 20:41:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:57.783 20:41:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:57.783 20:41:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 52899' 00:14:57.783 20:41:41 -- common/autotest_common.sh@945 -- # kill 52899 00:14:57.783 20:41:41 -- common/autotest_common.sh@950 -- # wait 52899 00:14:57.783 [2024-04-15 20:41:41.258004] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:57.783 [2024-04-15 20:41:41.258146] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:59.688 ************************************ 00:14:59.688 END TEST raid_state_function_test_sb 00:14:59.688 ************************************ 00:14:59.688 20:41:42 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:59.688 00:14:59.688 real 0m13.106s 00:14:59.688 user 0m22.534s 00:14:59.688 sys 0m1.645s 00:14:59.688 20:41:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:59.688 20:41:42 -- common/autotest_common.sh@10 -- # set +x 00:14:59.688 20:41:42 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:14:59.688 20:41:42 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:14:59.688 20:41:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:59.688 20:41:42 -- common/autotest_common.sh@10 -- # set +x 00:14:59.688 ************************************ 00:14:59.688 START TEST 
raid_superblock_test 00:14:59.688 ************************************ 00:14:59.688 20:41:42 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 4 00:14:59.688 20:41:42 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:14:59.688 20:41:42 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:14:59.688 20:41:42 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:14:59.688 20:41:42 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:14:59.688 20:41:42 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:14:59.688 20:41:42 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:14:59.688 20:41:42 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:14:59.688 20:41:42 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:14:59.688 20:41:42 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:14:59.688 20:41:42 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:14:59.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:59.688 20:41:42 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:14:59.688 20:41:42 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:14:59.688 20:41:42 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:14:59.688 20:41:42 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:14:59.688 20:41:42 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:14:59.688 20:41:42 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:14:59.688 20:41:42 -- bdev/bdev_raid.sh@357 -- # raid_pid=53332 00:14:59.688 20:41:42 -- bdev/bdev_raid.sh@358 -- # waitforlisten 53332 /var/tmp/spdk-raid.sock 00:14:59.688 20:41:42 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:59.688 20:41:42 -- common/autotest_common.sh@819 -- # '[' -z 53332 ']' 00:14:59.688 20:41:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:59.688 20:41:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:59.688 20:41:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:59.688 20:41:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:59.688 20:41:42 -- common/autotest_common.sh@10 -- # set +x 00:14:59.688 [2024-04-15 20:41:42.941112] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
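raid_superblock_test runs against its own bare bdev_svc app rather than a full SPDK target: it launches the binary with a private RPC socket plus bdev_raid debug logging, then blocks until the socket answers. A minimal sketch of that startup, with waitforlisten replaced by an explicit poll (the real helper in autotest_common.sh does more bookkeeping; the retry count and sleep interval here are arbitrary):

sock=/var/tmp/spdk-raid.sock
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$sock" -L bdev_raid &
raid_pid=$!

# Stand-in for waitforlisten: poll until the app serves RPCs on the socket.
for _ in $(seq 1 100); do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods \
        &>/dev/null && break
    sleep 0.1
done
kill -0 "$raid_pid"   # confirm the service is still alive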
00:14:59.688 [2024-04-15 20:41:42.941279] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53332 ] 00:14:59.688 [2024-04-15 20:41:43.110865] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.946 [2024-04-15 20:41:43.316919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.205 [2024-04-15 20:41:43.518928] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:01.140 20:41:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:01.140 20:41:44 -- common/autotest_common.sh@852 -- # return 0 00:15:01.140 20:41:44 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:01.140 20:41:44 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:01.140 20:41:44 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:01.140 20:41:44 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:01.140 20:41:44 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:01.140 20:41:44 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:01.140 20:41:44 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:01.140 20:41:44 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:01.140 20:41:44 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:01.140 malloc1 00:15:01.140 20:41:44 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:01.399 [2024-04-15 20:41:44.773532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:01.399 [2024-04-15 20:41:44.773626] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.399 [2024-04-15 20:41:44.773908] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027080 00:15:01.399 [2024-04-15 20:41:44.773956] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.399 pt1 00:15:01.399 [2024-04-15 20:41:44.775542] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.399 [2024-04-15 20:41:44.775584] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:01.399 20:41:44 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:01.399 20:41:44 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:01.399 20:41:44 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:01.399 20:41:44 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:01.399 20:41:44 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:01.399 20:41:44 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:01.399 20:41:44 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:01.400 20:41:44 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:01.400 20:41:44 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:01.662 malloc2 00:15:01.662 20:41:45 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:15:01.921 [2024-04-15 20:41:45.177170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:01.921 [2024-04-15 20:41:45.177256] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.921 [2024-04-15 20:41:45.177308] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000028e80 00:15:01.921 [2024-04-15 20:41:45.177347] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.921 [2024-04-15 20:41:45.179094] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.921 [2024-04-15 20:41:45.179133] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:01.921 pt2 00:15:01.921 20:41:45 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:01.921 20:41:45 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:01.921 20:41:45 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:15:01.921 20:41:45 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:15:01.921 20:41:45 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:01.921 20:41:45 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:01.921 20:41:45 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:01.921 20:41:45 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:01.921 20:41:45 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:15:01.921 malloc3 00:15:01.921 20:41:45 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:02.179 [2024-04-15 20:41:45.568781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:02.179 [2024-04-15 20:41:45.568865] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.179 [2024-04-15 20:41:45.568909] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002ac80 00:15:02.179 [2024-04-15 20:41:45.568949] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.179 [2024-04-15 20:41:45.570696] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.179 [2024-04-15 20:41:45.570748] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:02.179 pt3 00:15:02.179 20:41:45 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:02.179 20:41:45 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:02.179 20:41:45 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:15:02.179 20:41:45 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:15:02.179 20:41:45 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:02.179 20:41:45 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:02.179 20:41:45 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:02.179 20:41:45 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:02.179 20:41:45 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:15:02.437 malloc4 00:15:02.437 20:41:45 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 
00000000-0000-0000-0000-000000000004 00:15:02.694 [2024-04-15 20:41:45.975036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:02.694 [2024-04-15 20:41:45.975119] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.695 [2024-04-15 20:41:45.975152] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002ca80 00:15:02.695 [2024-04-15 20:41:45.975201] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.695 [2024-04-15 20:41:45.976847] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.695 [2024-04-15 20:41:45.976895] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:02.695 pt4 00:15:02.695 20:41:45 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:02.695 20:41:45 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:02.695 20:41:45 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:15:02.695 [2024-04-15 20:41:46.174881] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:02.695 [2024-04-15 20:41:46.176286] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:02.695 [2024-04-15 20:41:46.176328] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:02.695 [2024-04-15 20:41:46.176374] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:02.695 [2024-04-15 20:41:46.176475] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002df80 00:15:02.695 [2024-04-15 20:41:46.176484] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:02.695 [2024-04-15 20:41:46.176579] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:15:02.695 [2024-04-15 20:41:46.176797] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002df80 00:15:02.695 [2024-04-15 20:41:46.176808] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002df80 00:15:02.695 [2024-04-15 20:41:46.176903] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.695 20:41:46 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:02.695 20:41:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:02.695 20:41:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:02.695 20:41:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:02.695 20:41:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:02.695 20:41:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:02.695 20:41:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:02.695 20:41:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:02.695 20:41:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:02.695 20:41:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:02.695 20:41:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.695 20:41:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:02.953 20:41:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:02.953 "name": "raid_bdev1", 00:15:02.953 "uuid": 
"22191559-4dd5-4d03-b6ec-e0530fc68947", 00:15:02.953 "strip_size_kb": 64, 00:15:02.953 "state": "online", 00:15:02.953 "raid_level": "raid0", 00:15:02.953 "superblock": true, 00:15:02.953 "num_base_bdevs": 4, 00:15:02.953 "num_base_bdevs_discovered": 4, 00:15:02.953 "num_base_bdevs_operational": 4, 00:15:02.953 "base_bdevs_list": [ 00:15:02.953 { 00:15:02.953 "name": "pt1", 00:15:02.953 "uuid": "dab9ca1a-4bc4-5514-9c17-d6c5aab79183", 00:15:02.953 "is_configured": true, 00:15:02.953 "data_offset": 2048, 00:15:02.953 "data_size": 63488 00:15:02.953 }, 00:15:02.953 { 00:15:02.953 "name": "pt2", 00:15:02.953 "uuid": "b8e5641e-f364-5c67-8298-95687be24db3", 00:15:02.953 "is_configured": true, 00:15:02.953 "data_offset": 2048, 00:15:02.953 "data_size": 63488 00:15:02.953 }, 00:15:02.953 { 00:15:02.953 "name": "pt3", 00:15:02.953 "uuid": "33c4447a-6f4c-5ccf-8043-6284fe82f4a2", 00:15:02.953 "is_configured": true, 00:15:02.954 "data_offset": 2048, 00:15:02.954 "data_size": 63488 00:15:02.954 }, 00:15:02.954 { 00:15:02.954 "name": "pt4", 00:15:02.954 "uuid": "0707809b-647b-5db7-8120-f9d1e31cef07", 00:15:02.954 "is_configured": true, 00:15:02.954 "data_offset": 2048, 00:15:02.954 "data_size": 63488 00:15:02.954 } 00:15:02.954 ] 00:15:02.954 }' 00:15:02.954 20:41:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:02.954 20:41:46 -- common/autotest_common.sh@10 -- # set +x 00:15:03.520 20:41:46 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:03.520 20:41:46 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:03.779 [2024-04-15 20:41:47.101513] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:03.779 20:41:47 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=22191559-4dd5-4d03-b6ec-e0530fc68947 00:15:03.779 20:41:47 -- bdev/bdev_raid.sh@380 -- # '[' -z 22191559-4dd5-4d03-b6ec-e0530fc68947 ']' 00:15:03.779 20:41:47 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:04.038 [2024-04-15 20:41:47.341070] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:04.038 [2024-04-15 20:41:47.341108] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:04.038 [2024-04-15 20:41:47.341177] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:04.038 [2024-04-15 20:41:47.341216] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:04.038 [2024-04-15 20:41:47.341225] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002df80 name raid_bdev1, state offline 00:15:04.038 20:41:47 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:04.038 20:41:47 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.296 20:41:47 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:04.296 20:41:47 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:04.296 20:41:47 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:04.296 20:41:47 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:04.296 20:41:47 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:04.296 20:41:47 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:15:04.555 20:41:47 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:04.555 20:41:47 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:04.813 20:41:48 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:04.813 20:41:48 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:15:04.813 20:41:48 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:04.813 20:41:48 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:05.072 20:41:48 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:05.072 20:41:48 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:05.072 20:41:48 -- common/autotest_common.sh@640 -- # local es=0 00:15:05.072 20:41:48 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:05.072 20:41:48 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:05.072 20:41:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:05.072 20:41:48 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:05.072 20:41:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:05.072 20:41:48 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:05.072 20:41:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:05.072 20:41:48 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:05.072 20:41:48 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:05.072 20:41:48 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:05.331 [2024-04-15 20:41:48.714932] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:05.331 [2024-04-15 20:41:48.716424] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:05.331 [2024-04-15 20:41:48.716461] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:05.331 [2024-04-15 20:41:48.716479] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:05.331 [2024-04-15 20:41:48.716505] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:05.331 [2024-04-15 20:41:48.716566] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:05.331 [2024-04-15 20:41:48.716593] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:15:05.331 [2024-04-15 20:41:48.716631] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:15:05.331 [2024-04-15 20:41:48.716661] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:05.331 [2024-04-15 20:41:48.716671] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002e580 name raid_bdev1, state configuring 00:15:05.331 request: 00:15:05.331 { 00:15:05.331 "name": "raid_bdev1", 00:15:05.331 "raid_level": "raid0", 00:15:05.331 "base_bdevs": [ 00:15:05.331 "malloc1", 00:15:05.331 "malloc2", 00:15:05.331 "malloc3", 00:15:05.331 "malloc4" 00:15:05.331 ], 00:15:05.331 "superblock": false, 00:15:05.331 "strip_size_kb": 64, 00:15:05.331 "method": "bdev_raid_create", 00:15:05.331 "req_id": 1 00:15:05.331 } 00:15:05.331 Got JSON-RPC error response 00:15:05.331 response: 00:15:05.331 { 00:15:05.331 "code": -17, 00:15:05.331 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:05.331 } 00:15:05.331 20:41:48 -- common/autotest_common.sh@643 -- # es=1 00:15:05.331 20:41:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:05.331 20:41:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:05.331 20:41:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:05.331 20:41:48 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:05.331 20:41:48 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.589 20:41:48 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:05.589 20:41:48 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:05.590 20:41:48 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:05.590 [2024-04-15 20:41:49.090392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:05.590 [2024-04-15 20:41:49.090466] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.590 [2024-04-15 20:41:49.090519] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002fa80 00:15:05.590 [2024-04-15 20:41:49.090545] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.849 [2024-04-15 20:41:49.092277] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.849 [2024-04-15 20:41:49.092335] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:05.849 [2024-04-15 20:41:49.092425] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:05.849 [2024-04-15 20:41:49.092480] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:05.849 pt1 00:15:05.849 20:41:49 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:15:05.849 20:41:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:05.849 20:41:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:05.849 20:41:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:05.849 20:41:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:05.849 20:41:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:05.849 20:41:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:05.849 20:41:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:05.849 20:41:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:05.849 20:41:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:05.849 20:41:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
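That failed call is the core assertion of the superblock test: raid_bdev1 was deleted along with its passthru members, but the superblock written via the -s flag still sits on malloc1..malloc4, so composing a brand-new array directly from them is rejected with JSON-RPC error -17, "File exists". Re-wrapping malloc1 as pt1 instead lets examine read that superblock and put raid_bdev1 back into configuring with one member, which is what the trace shows next. A sketch of the negative check, using the exact command from the trace (the NOT helper simply inverts the exit status):

# Re-creating a raid over members that still carry a raid superblock must fail.
if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
       bdev_raid_create -z 64 -r raid0 \
       -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1; then
    echo 'raid_bdev1 was created over stale superblocks' >&2
    exit 1    # expected JSON-RPC error -17 (File exists)
fi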
00:15:05.849 20:41:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.849 20:41:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:05.849 "name": "raid_bdev1", 00:15:05.849 "uuid": "22191559-4dd5-4d03-b6ec-e0530fc68947", 00:15:05.849 "strip_size_kb": 64, 00:15:05.849 "state": "configuring", 00:15:05.849 "raid_level": "raid0", 00:15:05.849 "superblock": true, 00:15:05.849 "num_base_bdevs": 4, 00:15:05.849 "num_base_bdevs_discovered": 1, 00:15:05.849 "num_base_bdevs_operational": 4, 00:15:05.849 "base_bdevs_list": [ 00:15:05.849 { 00:15:05.849 "name": "pt1", 00:15:05.849 "uuid": "dab9ca1a-4bc4-5514-9c17-d6c5aab79183", 00:15:05.849 "is_configured": true, 00:15:05.849 "data_offset": 2048, 00:15:05.849 "data_size": 63488 00:15:05.849 }, 00:15:05.849 { 00:15:05.849 "name": null, 00:15:05.849 "uuid": "b8e5641e-f364-5c67-8298-95687be24db3", 00:15:05.849 "is_configured": false, 00:15:05.849 "data_offset": 2048, 00:15:05.849 "data_size": 63488 00:15:05.849 }, 00:15:05.849 { 00:15:05.849 "name": null, 00:15:05.849 "uuid": "33c4447a-6f4c-5ccf-8043-6284fe82f4a2", 00:15:05.849 "is_configured": false, 00:15:05.849 "data_offset": 2048, 00:15:05.849 "data_size": 63488 00:15:05.849 }, 00:15:05.849 { 00:15:05.849 "name": null, 00:15:05.849 "uuid": "0707809b-647b-5db7-8120-f9d1e31cef07", 00:15:05.849 "is_configured": false, 00:15:05.849 "data_offset": 2048, 00:15:05.849 "data_size": 63488 00:15:05.849 } 00:15:05.849 ] 00:15:05.849 }' 00:15:05.849 20:41:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:05.849 20:41:49 -- common/autotest_common.sh@10 -- # set +x 00:15:06.415 20:41:49 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:15:06.415 20:41:49 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:06.673 [2024-04-15 20:41:49.941104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:06.673 [2024-04-15 20:41:49.941172] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.673 [2024-04-15 20:41:49.941251] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000031880 00:15:06.673 [2024-04-15 20:41:49.941279] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.673 [2024-04-15 20:41:49.941557] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.673 [2024-04-15 20:41:49.941589] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:06.673 [2024-04-15 20:41:49.941868] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:06.673 [2024-04-15 20:41:49.941900] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:06.673 pt2 00:15:06.673 20:41:49 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:06.673 [2024-04-15 20:41:50.100853] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:06.673 20:41:50 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:15:06.673 20:41:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:06.673 20:41:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:06.673 20:41:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:06.673 20:41:50 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:06.673 20:41:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:06.673 20:41:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:06.673 20:41:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:06.673 20:41:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:06.673 20:41:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:06.673 20:41:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.673 20:41:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.931 20:41:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:06.931 "name": "raid_bdev1", 00:15:06.931 "uuid": "22191559-4dd5-4d03-b6ec-e0530fc68947", 00:15:06.931 "strip_size_kb": 64, 00:15:06.931 "state": "configuring", 00:15:06.931 "raid_level": "raid0", 00:15:06.931 "superblock": true, 00:15:06.931 "num_base_bdevs": 4, 00:15:06.931 "num_base_bdevs_discovered": 1, 00:15:06.931 "num_base_bdevs_operational": 4, 00:15:06.931 "base_bdevs_list": [ 00:15:06.931 { 00:15:06.931 "name": "pt1", 00:15:06.931 "uuid": "dab9ca1a-4bc4-5514-9c17-d6c5aab79183", 00:15:06.931 "is_configured": true, 00:15:06.931 "data_offset": 2048, 00:15:06.931 "data_size": 63488 00:15:06.931 }, 00:15:06.931 { 00:15:06.931 "name": null, 00:15:06.931 "uuid": "b8e5641e-f364-5c67-8298-95687be24db3", 00:15:06.931 "is_configured": false, 00:15:06.931 "data_offset": 2048, 00:15:06.931 "data_size": 63488 00:15:06.931 }, 00:15:06.931 { 00:15:06.931 "name": null, 00:15:06.931 "uuid": "33c4447a-6f4c-5ccf-8043-6284fe82f4a2", 00:15:06.931 "is_configured": false, 00:15:06.931 "data_offset": 2048, 00:15:06.931 "data_size": 63488 00:15:06.931 }, 00:15:06.931 { 00:15:06.931 "name": null, 00:15:06.931 "uuid": "0707809b-647b-5db7-8120-f9d1e31cef07", 00:15:06.931 "is_configured": false, 00:15:06.931 "data_offset": 2048, 00:15:06.931 "data_size": 63488 00:15:06.931 } 00:15:06.931 ] 00:15:06.931 }' 00:15:06.931 20:41:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:06.931 20:41:50 -- common/autotest_common.sh@10 -- # set +x 00:15:07.499 20:41:50 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:07.499 20:41:50 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:07.499 20:41:50 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:07.499 [2024-04-15 20:41:50.931584] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:07.499 [2024-04-15 20:41:50.931798] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.499 [2024-04-15 20:41:50.931859] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000032d80 00:15:07.499 [2024-04-15 20:41:50.931877] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.499 [2024-04-15 20:41:50.932150] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.499 [2024-04-15 20:41:50.932184] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:07.499 [2024-04-15 20:41:50.932263] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:07.499 [2024-04-15 20:41:50.932281] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:07.499 pt2 00:15:07.499 20:41:50 -- 
bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:07.499 20:41:50 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:07.499 20:41:50 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:07.758 [2024-04-15 20:41:51.075360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:07.758 [2024-04-15 20:41:51.075422] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.758 [2024-04-15 20:41:51.075452] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000034280 00:15:07.758 [2024-04-15 20:41:51.075474] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.758 [2024-04-15 20:41:51.075920] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.758 [2024-04-15 20:41:51.075970] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:07.758 [2024-04-15 20:41:51.076042] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:15:07.758 [2024-04-15 20:41:51.076061] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:07.758 pt3 00:15:07.758 20:41:51 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:07.758 20:41:51 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:07.758 20:41:51 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:07.759 [2024-04-15 20:41:51.251088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:07.759 [2024-04-15 20:41:51.251154] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.759 [2024-04-15 20:41:51.251184] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000035780 00:15:07.759 [2024-04-15 20:41:51.251207] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.759 [2024-04-15 20:41:51.251468] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.759 [2024-04-15 20:41:51.251503] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:07.759 [2024-04-15 20:41:51.251570] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:15:07.759 [2024-04-15 20:41:51.251586] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:07.759 [2024-04-15 20:41:51.251817] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000031280 00:15:07.759 [2024-04-15 20:41:51.251837] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:07.759 [2024-04-15 20:41:51.251919] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:07.759 [2024-04-15 20:41:51.252101] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000031280 00:15:07.759 [2024-04-15 20:41:51.252112] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000031280 00:15:07.759 [2024-04-15 20:41:51.252194] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:07.759 pt4 00:15:08.017 20:41:51 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:08.017 20:41:51 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs 
)) 00:15:08.017 20:41:51 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:15:08.017 20:41:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:08.017 20:41:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:08.017 20:41:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:08.017 20:41:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:08.017 20:41:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:08.017 20:41:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:08.017 20:41:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:08.017 20:41:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:08.017 20:41:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:08.017 20:41:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.017 20:41:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:08.017 20:41:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:08.017 "name": "raid_bdev1", 00:15:08.017 "uuid": "22191559-4dd5-4d03-b6ec-e0530fc68947", 00:15:08.017 "strip_size_kb": 64, 00:15:08.018 "state": "online", 00:15:08.018 "raid_level": "raid0", 00:15:08.018 "superblock": true, 00:15:08.018 "num_base_bdevs": 4, 00:15:08.018 "num_base_bdevs_discovered": 4, 00:15:08.018 "num_base_bdevs_operational": 4, 00:15:08.018 "base_bdevs_list": [ 00:15:08.018 { 00:15:08.018 "name": "pt1", 00:15:08.018 "uuid": "dab9ca1a-4bc4-5514-9c17-d6c5aab79183", 00:15:08.018 "is_configured": true, 00:15:08.018 "data_offset": 2048, 00:15:08.018 "data_size": 63488 00:15:08.018 }, 00:15:08.018 { 00:15:08.018 "name": "pt2", 00:15:08.018 "uuid": "b8e5641e-f364-5c67-8298-95687be24db3", 00:15:08.018 "is_configured": true, 00:15:08.018 "data_offset": 2048, 00:15:08.018 "data_size": 63488 00:15:08.018 }, 00:15:08.018 { 00:15:08.018 "name": "pt3", 00:15:08.018 "uuid": "33c4447a-6f4c-5ccf-8043-6284fe82f4a2", 00:15:08.018 "is_configured": true, 00:15:08.018 "data_offset": 2048, 00:15:08.018 "data_size": 63488 00:15:08.018 }, 00:15:08.018 { 00:15:08.018 "name": "pt4", 00:15:08.018 "uuid": "0707809b-647b-5db7-8120-f9d1e31cef07", 00:15:08.018 "is_configured": true, 00:15:08.018 "data_offset": 2048, 00:15:08.018 "data_size": 63488 00:15:08.018 } 00:15:08.018 ] 00:15:08.018 }' 00:15:08.018 20:41:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:08.018 20:41:51 -- common/autotest_common.sh@10 -- # set +x 00:15:08.586 20:41:51 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:08.587 20:41:51 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:08.855 [2024-04-15 20:41:52.141841] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:08.855 20:41:52 -- bdev/bdev_raid.sh@430 -- # '[' 22191559-4dd5-4d03-b6ec-e0530fc68947 '!=' 22191559-4dd5-4d03-b6ec-e0530fc68947 ']' 00:15:08.856 20:41:52 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:15:08.856 20:41:52 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:08.856 20:41:52 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:08.856 20:41:52 -- bdev/bdev_raid.sh@511 -- # killprocess 53332 00:15:08.856 20:41:52 -- common/autotest_common.sh@926 -- # '[' -z 53332 ']' 00:15:08.856 20:41:52 -- common/autotest_common.sh@930 -- # kill -0 53332 00:15:08.856 20:41:52 -- common/autotest_common.sh@931 -- # uname 00:15:08.856 20:41:52 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:08.856 20:41:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 53332 00:15:08.856 killing process with pid 53332 00:15:08.856 20:41:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:08.856 20:41:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:08.856 20:41:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 53332' 00:15:08.856 20:41:52 -- common/autotest_common.sh@945 -- # kill 53332 00:15:08.856 [2024-04-15 20:41:52.179496] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:08.856 20:41:52 -- common/autotest_common.sh@950 -- # wait 53332 00:15:08.856 [2024-04-15 20:41:52.179552] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:08.856 [2024-04-15 20:41:52.179589] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:08.856 [2024-04-15 20:41:52.179598] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000031280 name raid_bdev1, state offline 00:15:09.142 [2024-04-15 20:41:52.523425] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:10.530 ************************************ 00:15:10.530 END TEST raid_superblock_test 00:15:10.530 ************************************ 00:15:10.530 20:41:53 -- bdev/bdev_raid.sh@513 -- # return 0 00:15:10.530 00:15:10.530 real 0m11.024s 00:15:10.530 user 0m18.273s 00:15:10.531 sys 0m1.279s 00:15:10.531 20:41:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:10.531 20:41:53 -- common/autotest_common.sh@10 -- # set +x 00:15:10.531 20:41:53 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:10.531 20:41:53 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:15:10.531 20:41:53 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:10.531 20:41:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:10.531 20:41:53 -- common/autotest_common.sh@10 -- # set +x 00:15:10.531 ************************************ 00:15:10.531 START TEST raid_state_function_test 00:15:10.531 ************************************ 00:15:10.531 20:41:53 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 false 00:15:10.531 20:41:53 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:15:10.531 20:41:53 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:15:10.531 20:41:53 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:10.531 20:41:53 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:10.531 20:41:53 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:10.531 20:41:53 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:10.531 20:41:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:10.531 20:41:53 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:10.531 20:41:53 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:10.531 20:41:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:10.531 20:41:53 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:10.531 20:41:53 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:10.531 20:41:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:10.531 20:41:53 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:15:10.531 20:41:53 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:10.531 20:41:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:10.531 
20:41:53 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:15:10.531 20:41:53 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:10.531 20:41:53 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:10.531 Process raid pid: 53658 00:15:10.531 20:41:53 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:10.531 20:41:53 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:10.531 20:41:53 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:10.531 20:41:53 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:10.531 20:41:53 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:10.531 20:41:53 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:15:10.531 20:41:53 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:10.531 20:41:53 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:10.531 20:41:53 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:10.531 20:41:53 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:10.531 20:41:53 -- bdev/bdev_raid.sh@226 -- # raid_pid=53658 00:15:10.531 20:41:53 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 53658' 00:15:10.531 20:41:53 -- bdev/bdev_raid.sh@228 -- # waitforlisten 53658 /var/tmp/spdk-raid.sock 00:15:10.531 20:41:53 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:10.531 20:41:53 -- common/autotest_common.sh@819 -- # '[' -z 53658 ']' 00:15:10.531 20:41:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:10.531 20:41:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:10.531 20:41:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:10.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:10.531 20:41:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:10.531 20:41:53 -- common/autotest_common.sh@10 -- # set +x 00:15:10.531 [2024-04-15 20:41:54.025135] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
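At this point the harness has launched a fresh bdev_svc stub app for raid_state_function_test and is about to drive it over the /var/tmp/spdk-raid.sock RPC socket. As a reading aid (not part of the captured output), the lines below are a minimal standalone sketch of the RPC sequence the xtrace that follows records, assuming a bdev_svc instance is already listening on that socket; every method name and flag is taken verbatim from the trace itself.

  RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'

  # Creating the raid bdev before its base bdevs exist leaves it in the
  # "configuring" state ("base bdev BaseBdevN doesn't exist now" in the trace).
  $RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

  # Register one base bdev: 32 MiB of 512-byte blocks, i.e. the 65536-block
  # Malloc disk visible in the JSON dumps further down. The raid claims it.
  $RPC bdev_malloc_create 32 512 -b BaseBdev1

  # Poll assembly progress the same way verify_raid_bdev_state does.
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'
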
00:15:10.531 [2024-04-15 20:41:54.025291] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:10.790 [2024-04-15 20:41:54.207988] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.049 [2024-04-15 20:41:54.405541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.308 [2024-04-15 20:41:54.601672] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:11.568 20:41:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:11.568 20:41:54 -- common/autotest_common.sh@852 -- # return 0 00:15:11.568 20:41:54 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:11.568 [2024-04-15 20:41:55.038392] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:11.568 [2024-04-15 20:41:55.038478] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:11.568 [2024-04-15 20:41:55.038489] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:11.568 [2024-04-15 20:41:55.038505] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:11.568 [2024-04-15 20:41:55.038513] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:11.568 [2024-04-15 20:41:55.038555] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:11.568 [2024-04-15 20:41:55.038563] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:11.568 [2024-04-15 20:41:55.038583] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:11.568 20:41:55 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:11.568 20:41:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:11.568 20:41:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:11.568 20:41:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:11.568 20:41:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:11.568 20:41:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:11.568 20:41:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:11.568 20:41:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:11.568 20:41:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:11.568 20:41:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:11.568 20:41:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:11.568 20:41:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.827 20:41:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:11.827 "name": "Existed_Raid", 00:15:11.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.827 "strip_size_kb": 64, 00:15:11.827 "state": "configuring", 00:15:11.827 "raid_level": "concat", 00:15:11.827 "superblock": false, 00:15:11.827 "num_base_bdevs": 4, 00:15:11.827 "num_base_bdevs_discovered": 0, 00:15:11.827 "num_base_bdevs_operational": 4, 00:15:11.827 "base_bdevs_list": [ 00:15:11.827 { 00:15:11.827 
"name": "BaseBdev1", 00:15:11.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.827 "is_configured": false, 00:15:11.827 "data_offset": 0, 00:15:11.827 "data_size": 0 00:15:11.827 }, 00:15:11.827 { 00:15:11.827 "name": "BaseBdev2", 00:15:11.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.827 "is_configured": false, 00:15:11.827 "data_offset": 0, 00:15:11.827 "data_size": 0 00:15:11.827 }, 00:15:11.827 { 00:15:11.827 "name": "BaseBdev3", 00:15:11.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.827 "is_configured": false, 00:15:11.827 "data_offset": 0, 00:15:11.827 "data_size": 0 00:15:11.827 }, 00:15:11.827 { 00:15:11.827 "name": "BaseBdev4", 00:15:11.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.827 "is_configured": false, 00:15:11.827 "data_offset": 0, 00:15:11.827 "data_size": 0 00:15:11.827 } 00:15:11.827 ] 00:15:11.827 }' 00:15:11.827 20:41:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:11.827 20:41:55 -- common/autotest_common.sh@10 -- # set +x 00:15:12.395 20:41:55 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:12.395 [2024-04-15 20:41:55.885022] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:12.395 [2024-04-15 20:41:55.885065] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:15:12.654 20:41:55 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:12.654 [2024-04-15 20:41:56.064804] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:12.654 [2024-04-15 20:41:56.064869] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:12.654 [2024-04-15 20:41:56.064879] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:12.654 [2024-04-15 20:41:56.064908] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:12.654 [2024-04-15 20:41:56.064917] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:12.654 [2024-04-15 20:41:56.064941] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:12.654 [2024-04-15 20:41:56.064948] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:12.654 [2024-04-15 20:41:56.064969] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:12.654 20:41:56 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:12.913 BaseBdev1 00:15:12.913 [2024-04-15 20:41:56.302888] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:12.913 20:41:56 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:12.913 20:41:56 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:12.913 20:41:56 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:12.913 20:41:56 -- common/autotest_common.sh@889 -- # local i 00:15:12.913 20:41:56 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:12.913 20:41:56 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:12.913 20:41:56 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:13.172 20:41:56 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:13.172 [ 00:15:13.172 { 00:15:13.172 "name": "BaseBdev1", 00:15:13.172 "aliases": [ 00:15:13.172 "cea27c94-4562-41a3-a547-41b22c478c35" 00:15:13.172 ], 00:15:13.172 "product_name": "Malloc disk", 00:15:13.172 "block_size": 512, 00:15:13.172 "num_blocks": 65536, 00:15:13.172 "uuid": "cea27c94-4562-41a3-a547-41b22c478c35", 00:15:13.172 "assigned_rate_limits": { 00:15:13.172 "rw_ios_per_sec": 0, 00:15:13.172 "rw_mbytes_per_sec": 0, 00:15:13.172 "r_mbytes_per_sec": 0, 00:15:13.172 "w_mbytes_per_sec": 0 00:15:13.172 }, 00:15:13.172 "claimed": true, 00:15:13.172 "claim_type": "exclusive_write", 00:15:13.172 "zoned": false, 00:15:13.172 "supported_io_types": { 00:15:13.172 "read": true, 00:15:13.172 "write": true, 00:15:13.172 "unmap": true, 00:15:13.172 "write_zeroes": true, 00:15:13.172 "flush": true, 00:15:13.172 "reset": true, 00:15:13.172 "compare": false, 00:15:13.172 "compare_and_write": false, 00:15:13.172 "abort": true, 00:15:13.172 "nvme_admin": false, 00:15:13.172 "nvme_io": false 00:15:13.172 }, 00:15:13.172 "memory_domains": [ 00:15:13.172 { 00:15:13.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.172 "dma_device_type": 2 00:15:13.172 } 00:15:13.172 ], 00:15:13.172 "driver_specific": {} 00:15:13.172 } 00:15:13.172 ] 00:15:13.172 20:41:56 -- common/autotest_common.sh@895 -- # return 0 00:15:13.172 20:41:56 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:13.172 20:41:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:13.172 20:41:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:13.172 20:41:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:13.172 20:41:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:13.172 20:41:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:13.172 20:41:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:13.172 20:41:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:13.172 20:41:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:13.172 20:41:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:13.434 20:41:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:13.434 20:41:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.434 20:41:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:13.434 "name": "Existed_Raid", 00:15:13.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.434 "strip_size_kb": 64, 00:15:13.434 "state": "configuring", 00:15:13.434 "raid_level": "concat", 00:15:13.434 "superblock": false, 00:15:13.434 "num_base_bdevs": 4, 00:15:13.434 "num_base_bdevs_discovered": 1, 00:15:13.434 "num_base_bdevs_operational": 4, 00:15:13.434 "base_bdevs_list": [ 00:15:13.434 { 00:15:13.434 "name": "BaseBdev1", 00:15:13.434 "uuid": "cea27c94-4562-41a3-a547-41b22c478c35", 00:15:13.434 "is_configured": true, 00:15:13.434 "data_offset": 0, 00:15:13.434 "data_size": 65536 00:15:13.434 }, 00:15:13.434 { 00:15:13.434 "name": "BaseBdev2", 00:15:13.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.434 "is_configured": false, 00:15:13.434 "data_offset": 0, 00:15:13.434 "data_size": 0 00:15:13.434 }, 
00:15:13.434 { 00:15:13.434 "name": "BaseBdev3", 00:15:13.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.434 "is_configured": false, 00:15:13.434 "data_offset": 0, 00:15:13.434 "data_size": 0 00:15:13.434 }, 00:15:13.434 { 00:15:13.434 "name": "BaseBdev4", 00:15:13.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.434 "is_configured": false, 00:15:13.434 "data_offset": 0, 00:15:13.434 "data_size": 0 00:15:13.434 } 00:15:13.434 ] 00:15:13.434 }' 00:15:13.434 20:41:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:13.434 20:41:56 -- common/autotest_common.sh@10 -- # set +x 00:15:14.003 20:41:57 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:14.262 [2024-04-15 20:41:57.601044] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:14.262 [2024-04-15 20:41:57.601097] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027680 name Existed_Raid, state configuring 00:15:14.262 20:41:57 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:14.262 20:41:57 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:14.521 [2024-04-15 20:41:57.796812] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:14.521 [2024-04-15 20:41:57.798478] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:14.521 [2024-04-15 20:41:57.798549] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:14.521 [2024-04-15 20:41:57.798568] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:14.521 [2024-04-15 20:41:57.798590] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:14.521 [2024-04-15 20:41:57.798598] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:14.521 [2024-04-15 20:41:57.798613] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:14.521 20:41:57 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:14.521 20:41:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:14.521 20:41:57 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:14.521 20:41:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:14.521 20:41:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:14.521 20:41:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:14.521 20:41:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:14.521 20:41:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:14.521 20:41:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:14.521 20:41:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:14.521 20:41:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:14.521 20:41:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:14.521 20:41:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.521 20:41:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.521 20:41:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:14.521 "name": "Existed_Raid", 00:15:14.521 
"uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.521 "strip_size_kb": 64, 00:15:14.521 "state": "configuring", 00:15:14.521 "raid_level": "concat", 00:15:14.521 "superblock": false, 00:15:14.521 "num_base_bdevs": 4, 00:15:14.521 "num_base_bdevs_discovered": 1, 00:15:14.521 "num_base_bdevs_operational": 4, 00:15:14.521 "base_bdevs_list": [ 00:15:14.521 { 00:15:14.521 "name": "BaseBdev1", 00:15:14.521 "uuid": "cea27c94-4562-41a3-a547-41b22c478c35", 00:15:14.521 "is_configured": true, 00:15:14.521 "data_offset": 0, 00:15:14.521 "data_size": 65536 00:15:14.521 }, 00:15:14.521 { 00:15:14.521 "name": "BaseBdev2", 00:15:14.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.521 "is_configured": false, 00:15:14.521 "data_offset": 0, 00:15:14.521 "data_size": 0 00:15:14.521 }, 00:15:14.521 { 00:15:14.521 "name": "BaseBdev3", 00:15:14.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.522 "is_configured": false, 00:15:14.522 "data_offset": 0, 00:15:14.522 "data_size": 0 00:15:14.522 }, 00:15:14.522 { 00:15:14.522 "name": "BaseBdev4", 00:15:14.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.522 "is_configured": false, 00:15:14.522 "data_offset": 0, 00:15:14.522 "data_size": 0 00:15:14.522 } 00:15:14.522 ] 00:15:14.522 }' 00:15:14.522 20:41:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:14.522 20:41:57 -- common/autotest_common.sh@10 -- # set +x 00:15:15.089 20:41:58 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:15.349 [2024-04-15 20:41:58.731187] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:15.349 BaseBdev2 00:15:15.349 20:41:58 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:15.349 20:41:58 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:15.349 20:41:58 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:15.349 20:41:58 -- common/autotest_common.sh@889 -- # local i 00:15:15.349 20:41:58 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:15.349 20:41:58 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:15.349 20:41:58 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:15.608 20:41:58 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:15.608 [ 00:15:15.608 { 00:15:15.608 "name": "BaseBdev2", 00:15:15.608 "aliases": [ 00:15:15.608 "74505293-e2ef-4c4c-8ce7-77dc3126f3e1" 00:15:15.608 ], 00:15:15.608 "product_name": "Malloc disk", 00:15:15.608 "block_size": 512, 00:15:15.608 "num_blocks": 65536, 00:15:15.608 "uuid": "74505293-e2ef-4c4c-8ce7-77dc3126f3e1", 00:15:15.608 "assigned_rate_limits": { 00:15:15.608 "rw_ios_per_sec": 0, 00:15:15.608 "rw_mbytes_per_sec": 0, 00:15:15.608 "r_mbytes_per_sec": 0, 00:15:15.608 "w_mbytes_per_sec": 0 00:15:15.608 }, 00:15:15.608 "claimed": true, 00:15:15.608 "claim_type": "exclusive_write", 00:15:15.608 "zoned": false, 00:15:15.608 "supported_io_types": { 00:15:15.608 "read": true, 00:15:15.608 "write": true, 00:15:15.608 "unmap": true, 00:15:15.608 "write_zeroes": true, 00:15:15.608 "flush": true, 00:15:15.608 "reset": true, 00:15:15.608 "compare": false, 00:15:15.608 "compare_and_write": false, 00:15:15.608 "abort": true, 00:15:15.608 "nvme_admin": false, 00:15:15.608 "nvme_io": false 00:15:15.608 }, 00:15:15.608 "memory_domains": [ 
00:15:15.608 { 00:15:15.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.608 "dma_device_type": 2 00:15:15.608 } 00:15:15.608 ], 00:15:15.608 "driver_specific": {} 00:15:15.608 } 00:15:15.608 ] 00:15:15.608 20:41:59 -- common/autotest_common.sh@895 -- # return 0 00:15:15.608 20:41:59 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:15.608 20:41:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:15.608 20:41:59 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:15.608 20:41:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:15.608 20:41:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:15.608 20:41:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:15.608 20:41:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:15.608 20:41:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:15.608 20:41:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:15.608 20:41:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:15.608 20:41:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:15.608 20:41:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:15.608 20:41:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.608 20:41:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.868 20:41:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:15.868 "name": "Existed_Raid", 00:15:15.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.868 "strip_size_kb": 64, 00:15:15.868 "state": "configuring", 00:15:15.868 "raid_level": "concat", 00:15:15.868 "superblock": false, 00:15:15.868 "num_base_bdevs": 4, 00:15:15.868 "num_base_bdevs_discovered": 2, 00:15:15.868 "num_base_bdevs_operational": 4, 00:15:15.868 "base_bdevs_list": [ 00:15:15.868 { 00:15:15.868 "name": "BaseBdev1", 00:15:15.868 "uuid": "cea27c94-4562-41a3-a547-41b22c478c35", 00:15:15.868 "is_configured": true, 00:15:15.868 "data_offset": 0, 00:15:15.868 "data_size": 65536 00:15:15.868 }, 00:15:15.868 { 00:15:15.868 "name": "BaseBdev2", 00:15:15.868 "uuid": "74505293-e2ef-4c4c-8ce7-77dc3126f3e1", 00:15:15.868 "is_configured": true, 00:15:15.868 "data_offset": 0, 00:15:15.868 "data_size": 65536 00:15:15.868 }, 00:15:15.868 { 00:15:15.868 "name": "BaseBdev3", 00:15:15.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.868 "is_configured": false, 00:15:15.868 "data_offset": 0, 00:15:15.868 "data_size": 0 00:15:15.868 }, 00:15:15.868 { 00:15:15.868 "name": "BaseBdev4", 00:15:15.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.868 "is_configured": false, 00:15:15.868 "data_offset": 0, 00:15:15.868 "data_size": 0 00:15:15.868 } 00:15:15.868 ] 00:15:15.868 }' 00:15:15.868 20:41:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:15.868 20:41:59 -- common/autotest_common.sh@10 -- # set +x 00:15:16.437 20:41:59 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:16.696 BaseBdev3 00:15:16.696 [2024-04-15 20:42:00.019087] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:16.696 20:42:00 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:15:16.696 20:42:00 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:15:16.696 20:42:00 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:16.696 
20:42:00 -- common/autotest_common.sh@889 -- # local i 00:15:16.696 20:42:00 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:16.696 20:42:00 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:16.696 20:42:00 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:16.696 20:42:00 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:16.955 [ 00:15:16.955 { 00:15:16.955 "name": "BaseBdev3", 00:15:16.955 "aliases": [ 00:15:16.955 "c69d627e-ec40-43a4-86f3-077f44853e11" 00:15:16.955 ], 00:15:16.955 "product_name": "Malloc disk", 00:15:16.955 "block_size": 512, 00:15:16.955 "num_blocks": 65536, 00:15:16.955 "uuid": "c69d627e-ec40-43a4-86f3-077f44853e11", 00:15:16.955 "assigned_rate_limits": { 00:15:16.955 "rw_ios_per_sec": 0, 00:15:16.955 "rw_mbytes_per_sec": 0, 00:15:16.955 "r_mbytes_per_sec": 0, 00:15:16.955 "w_mbytes_per_sec": 0 00:15:16.955 }, 00:15:16.955 "claimed": true, 00:15:16.955 "claim_type": "exclusive_write", 00:15:16.955 "zoned": false, 00:15:16.955 "supported_io_types": { 00:15:16.955 "read": true, 00:15:16.955 "write": true, 00:15:16.955 "unmap": true, 00:15:16.955 "write_zeroes": true, 00:15:16.955 "flush": true, 00:15:16.955 "reset": true, 00:15:16.955 "compare": false, 00:15:16.955 "compare_and_write": false, 00:15:16.955 "abort": true, 00:15:16.955 "nvme_admin": false, 00:15:16.955 "nvme_io": false 00:15:16.955 }, 00:15:16.955 "memory_domains": [ 00:15:16.955 { 00:15:16.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.955 "dma_device_type": 2 00:15:16.955 } 00:15:16.955 ], 00:15:16.955 "driver_specific": {} 00:15:16.955 } 00:15:16.955 ] 00:15:16.956 20:42:00 -- common/autotest_common.sh@895 -- # return 0 00:15:16.956 20:42:00 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:16.956 20:42:00 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:16.956 20:42:00 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:16.956 20:42:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:16.956 20:42:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:16.956 20:42:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:16.956 20:42:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:16.956 20:42:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:16.956 20:42:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:16.956 20:42:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:16.956 20:42:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:16.956 20:42:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:16.956 20:42:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.956 20:42:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:17.215 20:42:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:17.215 "name": "Existed_Raid", 00:15:17.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.215 "strip_size_kb": 64, 00:15:17.215 "state": "configuring", 00:15:17.215 "raid_level": "concat", 00:15:17.215 "superblock": false, 00:15:17.215 "num_base_bdevs": 4, 00:15:17.215 "num_base_bdevs_discovered": 3, 00:15:17.215 "num_base_bdevs_operational": 4, 00:15:17.215 "base_bdevs_list": [ 00:15:17.215 { 00:15:17.215 "name": 
"BaseBdev1", 00:15:17.215 "uuid": "cea27c94-4562-41a3-a547-41b22c478c35", 00:15:17.215 "is_configured": true, 00:15:17.215 "data_offset": 0, 00:15:17.215 "data_size": 65536 00:15:17.215 }, 00:15:17.215 { 00:15:17.215 "name": "BaseBdev2", 00:15:17.215 "uuid": "74505293-e2ef-4c4c-8ce7-77dc3126f3e1", 00:15:17.215 "is_configured": true, 00:15:17.215 "data_offset": 0, 00:15:17.215 "data_size": 65536 00:15:17.215 }, 00:15:17.215 { 00:15:17.215 "name": "BaseBdev3", 00:15:17.215 "uuid": "c69d627e-ec40-43a4-86f3-077f44853e11", 00:15:17.215 "is_configured": true, 00:15:17.215 "data_offset": 0, 00:15:17.215 "data_size": 65536 00:15:17.215 }, 00:15:17.215 { 00:15:17.215 "name": "BaseBdev4", 00:15:17.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.215 "is_configured": false, 00:15:17.215 "data_offset": 0, 00:15:17.215 "data_size": 0 00:15:17.215 } 00:15:17.215 ] 00:15:17.215 }' 00:15:17.215 20:42:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:17.215 20:42:00 -- common/autotest_common.sh@10 -- # set +x 00:15:17.803 20:42:01 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:17.803 [2024-04-15 20:42:01.259035] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:17.803 [2024-04-15 20:42:01.259072] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000028b80 00:15:17.803 [2024-04-15 20:42:01.259080] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:15:17.803 [2024-04-15 20:42:01.259187] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:15:17.803 [2024-04-15 20:42:01.259366] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000028b80 00:15:17.803 [2024-04-15 20:42:01.259376] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000028b80 00:15:17.803 [2024-04-15 20:42:01.259524] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.803 BaseBdev4 00:15:17.803 20:42:01 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:15:17.803 20:42:01 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:15:17.803 20:42:01 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:17.803 20:42:01 -- common/autotest_common.sh@889 -- # local i 00:15:17.803 20:42:01 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:17.803 20:42:01 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:17.803 20:42:01 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:18.061 20:42:01 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:18.321 [ 00:15:18.321 { 00:15:18.321 "name": "BaseBdev4", 00:15:18.321 "aliases": [ 00:15:18.321 "c3e8f526-c3d8-4f01-824b-8c110342e3d6" 00:15:18.321 ], 00:15:18.321 "product_name": "Malloc disk", 00:15:18.321 "block_size": 512, 00:15:18.321 "num_blocks": 65536, 00:15:18.321 "uuid": "c3e8f526-c3d8-4f01-824b-8c110342e3d6", 00:15:18.321 "assigned_rate_limits": { 00:15:18.321 "rw_ios_per_sec": 0, 00:15:18.321 "rw_mbytes_per_sec": 0, 00:15:18.321 "r_mbytes_per_sec": 0, 00:15:18.321 "w_mbytes_per_sec": 0 00:15:18.321 }, 00:15:18.321 "claimed": true, 00:15:18.321 "claim_type": "exclusive_write", 00:15:18.321 "zoned": false, 00:15:18.321 
"supported_io_types": { 00:15:18.321 "read": true, 00:15:18.321 "write": true, 00:15:18.321 "unmap": true, 00:15:18.321 "write_zeroes": true, 00:15:18.321 "flush": true, 00:15:18.321 "reset": true, 00:15:18.321 "compare": false, 00:15:18.321 "compare_and_write": false, 00:15:18.321 "abort": true, 00:15:18.321 "nvme_admin": false, 00:15:18.321 "nvme_io": false 00:15:18.321 }, 00:15:18.321 "memory_domains": [ 00:15:18.321 { 00:15:18.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.321 "dma_device_type": 2 00:15:18.321 } 00:15:18.321 ], 00:15:18.321 "driver_specific": {} 00:15:18.321 } 00:15:18.321 ] 00:15:18.321 20:42:01 -- common/autotest_common.sh@895 -- # return 0 00:15:18.321 20:42:01 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:18.321 20:42:01 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:18.321 20:42:01 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:15:18.321 20:42:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:18.321 20:42:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:18.321 20:42:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:18.321 20:42:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:18.321 20:42:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:18.321 20:42:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:18.321 20:42:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:18.321 20:42:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:18.321 20:42:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:18.321 20:42:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.321 20:42:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.321 20:42:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:18.321 "name": "Existed_Raid", 00:15:18.321 "uuid": "10731dd8-c5e3-4488-bd17-d3e660f1343b", 00:15:18.321 "strip_size_kb": 64, 00:15:18.321 "state": "online", 00:15:18.321 "raid_level": "concat", 00:15:18.321 "superblock": false, 00:15:18.321 "num_base_bdevs": 4, 00:15:18.321 "num_base_bdevs_discovered": 4, 00:15:18.321 "num_base_bdevs_operational": 4, 00:15:18.321 "base_bdevs_list": [ 00:15:18.321 { 00:15:18.321 "name": "BaseBdev1", 00:15:18.321 "uuid": "cea27c94-4562-41a3-a547-41b22c478c35", 00:15:18.321 "is_configured": true, 00:15:18.321 "data_offset": 0, 00:15:18.321 "data_size": 65536 00:15:18.321 }, 00:15:18.321 { 00:15:18.321 "name": "BaseBdev2", 00:15:18.321 "uuid": "74505293-e2ef-4c4c-8ce7-77dc3126f3e1", 00:15:18.321 "is_configured": true, 00:15:18.321 "data_offset": 0, 00:15:18.321 "data_size": 65536 00:15:18.321 }, 00:15:18.321 { 00:15:18.321 "name": "BaseBdev3", 00:15:18.321 "uuid": "c69d627e-ec40-43a4-86f3-077f44853e11", 00:15:18.321 "is_configured": true, 00:15:18.321 "data_offset": 0, 00:15:18.321 "data_size": 65536 00:15:18.321 }, 00:15:18.321 { 00:15:18.321 "name": "BaseBdev4", 00:15:18.321 "uuid": "c3e8f526-c3d8-4f01-824b-8c110342e3d6", 00:15:18.321 "is_configured": true, 00:15:18.321 "data_offset": 0, 00:15:18.321 "data_size": 65536 00:15:18.321 } 00:15:18.321 ] 00:15:18.321 }' 00:15:18.321 20:42:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:18.321 20:42:01 -- common/autotest_common.sh@10 -- # set +x 00:15:18.890 20:42:02 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:15:19.149 [2024-04-15 20:42:02.485493] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:19.149 [2024-04-15 20:42:02.485523] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:19.149 [2024-04-15 20:42:02.485570] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:19.149 20:42:02 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:19.149 20:42:02 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:15:19.149 20:42:02 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:19.149 20:42:02 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:19.149 20:42:02 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:19.149 20:42:02 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:15:19.149 20:42:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:19.149 20:42:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:19.149 20:42:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:19.149 20:42:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:19.149 20:42:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:19.149 20:42:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:19.149 20:42:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:19.149 20:42:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:19.149 20:42:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:19.149 20:42:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.149 20:42:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.408 20:42:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:19.408 "name": "Existed_Raid", 00:15:19.408 "uuid": "10731dd8-c5e3-4488-bd17-d3e660f1343b", 00:15:19.408 "strip_size_kb": 64, 00:15:19.408 "state": "offline", 00:15:19.408 "raid_level": "concat", 00:15:19.408 "superblock": false, 00:15:19.408 "num_base_bdevs": 4, 00:15:19.408 "num_base_bdevs_discovered": 3, 00:15:19.408 "num_base_bdevs_operational": 3, 00:15:19.408 "base_bdevs_list": [ 00:15:19.408 { 00:15:19.408 "name": null, 00:15:19.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.408 "is_configured": false, 00:15:19.408 "data_offset": 0, 00:15:19.408 "data_size": 65536 00:15:19.408 }, 00:15:19.408 { 00:15:19.408 "name": "BaseBdev2", 00:15:19.408 "uuid": "74505293-e2ef-4c4c-8ce7-77dc3126f3e1", 00:15:19.408 "is_configured": true, 00:15:19.408 "data_offset": 0, 00:15:19.408 "data_size": 65536 00:15:19.408 }, 00:15:19.408 { 00:15:19.408 "name": "BaseBdev3", 00:15:19.408 "uuid": "c69d627e-ec40-43a4-86f3-077f44853e11", 00:15:19.408 "is_configured": true, 00:15:19.408 "data_offset": 0, 00:15:19.408 "data_size": 65536 00:15:19.408 }, 00:15:19.408 { 00:15:19.408 "name": "BaseBdev4", 00:15:19.408 "uuid": "c3e8f526-c3d8-4f01-824b-8c110342e3d6", 00:15:19.408 "is_configured": true, 00:15:19.408 "data_offset": 0, 00:15:19.408 "data_size": 65536 00:15:19.408 } 00:15:19.408 ] 00:15:19.408 }' 00:15:19.408 20:42:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:19.408 20:42:02 -- common/autotest_common.sh@10 -- # set +x 00:15:19.976 20:42:03 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:19.976 20:42:03 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:19.976 20:42:03 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:15:19.976 20:42:03 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:19.976 20:42:03 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:19.976 20:42:03 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:19.976 20:42:03 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:20.234 [2024-04-15 20:42:03.541876] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:20.234 20:42:03 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:20.234 20:42:03 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:20.234 20:42:03 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.234 20:42:03 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:20.493 20:42:03 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:20.493 20:42:03 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:20.493 20:42:03 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:20.493 [2024-04-15 20:42:03.962992] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:20.753 20:42:04 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:20.753 20:42:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:20.753 20:42:04 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.753 20:42:04 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:20.753 20:42:04 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:20.753 20:42:04 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:20.753 20:42:04 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:15:21.012 [2024-04-15 20:42:04.441832] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:21.012 [2024-04-15 20:42:04.441883] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000028b80 name Existed_Raid, state offline 00:15:21.272 20:42:04 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:21.272 20:42:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:21.272 20:42:04 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:21.272 20:42:04 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.272 20:42:04 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:21.272 20:42:04 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:21.272 20:42:04 -- bdev/bdev_raid.sh@287 -- # killprocess 53658 00:15:21.272 20:42:04 -- common/autotest_common.sh@926 -- # '[' -z 53658 ']' 00:15:21.272 20:42:04 -- common/autotest_common.sh@930 -- # kill -0 53658 00:15:21.272 20:42:04 -- common/autotest_common.sh@931 -- # uname 00:15:21.272 20:42:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:21.272 20:42:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 53658 00:15:21.272 killing process with pid 53658 00:15:21.272 20:42:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:21.272 20:42:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:21.272 20:42:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 53658' 00:15:21.272 20:42:04 -- common/autotest_common.sh@945 -- # 
kill 53658 00:15:21.272 [2024-04-15 20:42:04.741929] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:21.272 20:42:04 -- common/autotest_common.sh@950 -- # wait 53658 00:15:21.272 [2024-04-15 20:42:04.742034] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:22.653 20:42:05 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:22.653 00:15:22.653 real 0m12.099s 00:15:22.653 user 0m20.965s 00:15:22.653 sys 0m1.579s 00:15:22.653 20:42:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:22.653 20:42:05 -- common/autotest_common.sh@10 -- # set +x 00:15:22.653 ************************************ 00:15:22.653 END TEST raid_state_function_test 00:15:22.653 ************************************ 00:15:22.653 20:42:06 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:15:22.653 20:42:06 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:22.653 20:42:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:22.653 20:42:06 -- common/autotest_common.sh@10 -- # set +x 00:15:22.653 ************************************ 00:15:22.653 START TEST raid_state_function_test_sb 00:15:22.653 ************************************ 00:15:22.653 20:42:06 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 true 00:15:22.653 20:42:06 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:15:22.653 20:42:06 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:15:22.653 20:42:06 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:22.653 20:42:06 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:22.653 20:42:06 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:22.653 20:42:06 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:22.653 20:42:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:22.653 20:42:06 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:22.653 20:42:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:22.653 20:42:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:22.653 20:42:06 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:22.653 20:42:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:22.653 20:42:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:22.653 20:42:06 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:15:22.653 20:42:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:22.653 20:42:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:22.653 20:42:06 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:15:22.653 20:42:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:22.653 20:42:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:22.653 20:42:06 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:22.653 20:42:06 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:22.653 20:42:06 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:22.653 20:42:06 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:22.653 20:42:06 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:22.653 20:42:06 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:15:22.653 20:42:06 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:22.653 20:42:06 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:22.653 20:42:06 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:22.653 20:42:06 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:22.653 Process raid pid: 54074 00:15:22.653 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:22.653 20:42:06 -- bdev/bdev_raid.sh@226 -- # raid_pid=54074 00:15:22.653 20:42:06 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 54074' 00:15:22.653 20:42:06 -- bdev/bdev_raid.sh@228 -- # waitforlisten 54074 /var/tmp/spdk-raid.sock 00:15:22.654 20:42:06 -- common/autotest_common.sh@819 -- # '[' -z 54074 ']' 00:15:22.654 20:42:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:22.654 20:42:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:22.654 20:42:06 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:22.654 20:42:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:22.654 20:42:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:22.654 20:42:06 -- common/autotest_common.sh@10 -- # set +x 00:15:22.916 [2024-04-15 20:42:06.204869] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:15:22.916 [2024-04-15 20:42:06.205012] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.916 [2024-04-15 20:42:06.384943] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.228 [2024-04-15 20:42:06.591816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.486 [2024-04-15 20:42:06.786341] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:24.424 20:42:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:24.424 20:42:07 -- common/autotest_common.sh@852 -- # return 0 00:15:24.424 20:42:07 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:24.424 [2024-04-15 20:42:07.720917] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:24.424 [2024-04-15 20:42:07.720976] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:24.424 [2024-04-15 20:42:07.720987] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:24.424 [2024-04-15 20:42:07.721003] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:24.424 [2024-04-15 20:42:07.721010] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:24.424 [2024-04-15 20:42:07.721048] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:24.424 [2024-04-15 20:42:07.721056] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:24.424 [2024-04-15 20:42:07.721076] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:24.424 20:42:07 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:24.424 20:42:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:24.424 20:42:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:24.424 20:42:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:24.424 20:42:07 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:24.424 20:42:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:24.424 20:42:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:24.424 20:42:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:24.424 20:42:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:24.424 20:42:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:24.424 20:42:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.424 20:42:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.424 20:42:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:24.424 "name": "Existed_Raid", 00:15:24.424 "uuid": "bbb5f0d5-2016-46e8-acf8-2ad7a47b0c9a", 00:15:24.424 "strip_size_kb": 64, 00:15:24.424 "state": "configuring", 00:15:24.424 "raid_level": "concat", 00:15:24.424 "superblock": true, 00:15:24.424 "num_base_bdevs": 4, 00:15:24.424 "num_base_bdevs_discovered": 0, 00:15:24.424 "num_base_bdevs_operational": 4, 00:15:24.424 "base_bdevs_list": [ 00:15:24.424 { 00:15:24.424 "name": "BaseBdev1", 00:15:24.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.424 "is_configured": false, 00:15:24.424 "data_offset": 0, 00:15:24.424 "data_size": 0 00:15:24.424 }, 00:15:24.424 { 00:15:24.424 "name": "BaseBdev2", 00:15:24.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.424 "is_configured": false, 00:15:24.424 "data_offset": 0, 00:15:24.424 "data_size": 0 00:15:24.424 }, 00:15:24.424 { 00:15:24.424 "name": "BaseBdev3", 00:15:24.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.424 "is_configured": false, 00:15:24.424 "data_offset": 0, 00:15:24.424 "data_size": 0 00:15:24.424 }, 00:15:24.424 { 00:15:24.424 "name": "BaseBdev4", 00:15:24.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.424 "is_configured": false, 00:15:24.424 "data_offset": 0, 00:15:24.424 "data_size": 0 00:15:24.424 } 00:15:24.424 ] 00:15:24.424 }' 00:15:24.424 20:42:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:24.424 20:42:07 -- common/autotest_common.sh@10 -- # set +x 00:15:24.992 20:42:08 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:25.254 [2024-04-15 20:42:08.567487] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:25.255 [2024-04-15 20:42:08.567521] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:15:25.255 20:42:08 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:25.255 [2024-04-15 20:42:08.735330] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:25.255 [2024-04-15 20:42:08.735378] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:25.255 [2024-04-15 20:42:08.735387] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:25.255 [2024-04-15 20:42:08.735415] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:25.255 [2024-04-15 20:42:08.735423] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:25.255 [2024-04-15 20:42:08.735443] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:25.255 [2024-04-15 20:42:08.735450] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:25.255 [2024-04-15 20:42:08.735470] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:25.255 20:42:08 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:25.515 [2024-04-15 20:42:08.937358] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:25.515 BaseBdev1 00:15:25.515 20:42:08 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:25.515 20:42:08 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:25.515 20:42:08 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:25.515 20:42:08 -- common/autotest_common.sh@889 -- # local i 00:15:25.515 20:42:08 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:25.515 20:42:08 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:25.515 20:42:08 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:25.774 20:42:09 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:25.774 [ 00:15:25.774 { 00:15:25.774 "name": "BaseBdev1", 00:15:25.774 "aliases": [ 00:15:25.774 "62491066-3d12-40a8-8348-7787f3c5ce01" 00:15:25.774 ], 00:15:25.774 "product_name": "Malloc disk", 00:15:25.774 "block_size": 512, 00:15:25.774 "num_blocks": 65536, 00:15:25.774 "uuid": "62491066-3d12-40a8-8348-7787f3c5ce01", 00:15:25.774 "assigned_rate_limits": { 00:15:25.774 "rw_ios_per_sec": 0, 00:15:25.774 "rw_mbytes_per_sec": 0, 00:15:25.774 "r_mbytes_per_sec": 0, 00:15:25.774 "w_mbytes_per_sec": 0 00:15:25.774 }, 00:15:25.774 "claimed": true, 00:15:25.774 "claim_type": "exclusive_write", 00:15:25.774 "zoned": false, 00:15:25.774 "supported_io_types": { 00:15:25.774 "read": true, 00:15:25.774 "write": true, 00:15:25.774 "unmap": true, 00:15:25.774 "write_zeroes": true, 00:15:25.774 "flush": true, 00:15:25.774 "reset": true, 00:15:25.774 "compare": false, 00:15:25.774 "compare_and_write": false, 00:15:25.774 "abort": true, 00:15:25.774 "nvme_admin": false, 00:15:25.774 "nvme_io": false 00:15:25.774 }, 00:15:25.774 "memory_domains": [ 00:15:25.774 { 00:15:25.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.774 "dma_device_type": 2 00:15:25.774 } 00:15:25.774 ], 00:15:25.774 "driver_specific": {} 00:15:25.774 } 00:15:25.774 ] 00:15:25.774 20:42:09 -- common/autotest_common.sh@895 -- # return 0 00:15:25.774 20:42:09 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:25.774 20:42:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:25.774 20:42:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:25.774 20:42:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:25.774 20:42:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:25.774 20:42:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:25.774 20:42:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:25.774 20:42:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:25.774 20:42:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:25.774 20:42:09 -- bdev/bdev_raid.sh@125 -- # local 
tmp 00:15:25.774 20:42:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.774 20:42:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.034 20:42:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:26.034 "name": "Existed_Raid", 00:15:26.034 "uuid": "6f876c3a-8d4a-4d80-96d4-c024ff67f03f", 00:15:26.034 "strip_size_kb": 64, 00:15:26.034 "state": "configuring", 00:15:26.034 "raid_level": "concat", 00:15:26.034 "superblock": true, 00:15:26.034 "num_base_bdevs": 4, 00:15:26.034 "num_base_bdevs_discovered": 1, 00:15:26.034 "num_base_bdevs_operational": 4, 00:15:26.034 "base_bdevs_list": [ 00:15:26.034 { 00:15:26.034 "name": "BaseBdev1", 00:15:26.034 "uuid": "62491066-3d12-40a8-8348-7787f3c5ce01", 00:15:26.034 "is_configured": true, 00:15:26.034 "data_offset": 2048, 00:15:26.034 "data_size": 63488 00:15:26.034 }, 00:15:26.034 { 00:15:26.034 "name": "BaseBdev2", 00:15:26.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.034 "is_configured": false, 00:15:26.034 "data_offset": 0, 00:15:26.034 "data_size": 0 00:15:26.034 }, 00:15:26.034 { 00:15:26.034 "name": "BaseBdev3", 00:15:26.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.034 "is_configured": false, 00:15:26.034 "data_offset": 0, 00:15:26.034 "data_size": 0 00:15:26.034 }, 00:15:26.034 { 00:15:26.034 "name": "BaseBdev4", 00:15:26.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.034 "is_configured": false, 00:15:26.034 "data_offset": 0, 00:15:26.034 "data_size": 0 00:15:26.034 } 00:15:26.034 ] 00:15:26.034 }' 00:15:26.034 20:42:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:26.034 20:42:09 -- common/autotest_common.sh@10 -- # set +x 00:15:26.600 20:42:10 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:26.859 [2024-04-15 20:42:10.155543] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:26.859 [2024-04-15 20:42:10.155588] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027680 name Existed_Raid, state configuring 00:15:26.859 20:42:10 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:26.859 20:42:10 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:27.117 20:42:10 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:27.117 BaseBdev1 00:15:27.117 20:42:10 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:27.117 20:42:10 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:27.117 20:42:10 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:27.117 20:42:10 -- common/autotest_common.sh@889 -- # local i 00:15:27.117 20:42:10 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:27.117 20:42:10 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:27.117 20:42:10 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:27.375 20:42:10 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:27.375 [ 00:15:27.375 { 00:15:27.375 "name": "BaseBdev1", 00:15:27.375 "aliases": [ 00:15:27.375 "c87b8c2d-1b13-49ba-a55c-1b376e3d067a" 00:15:27.375 ], 
00:15:27.375 "product_name": "Malloc disk", 00:15:27.375 "block_size": 512, 00:15:27.375 "num_blocks": 65536, 00:15:27.375 "uuid": "c87b8c2d-1b13-49ba-a55c-1b376e3d067a", 00:15:27.375 "assigned_rate_limits": { 00:15:27.375 "rw_ios_per_sec": 0, 00:15:27.375 "rw_mbytes_per_sec": 0, 00:15:27.375 "r_mbytes_per_sec": 0, 00:15:27.375 "w_mbytes_per_sec": 0 00:15:27.375 }, 00:15:27.375 "claimed": false, 00:15:27.375 "zoned": false, 00:15:27.375 "supported_io_types": { 00:15:27.375 "read": true, 00:15:27.375 "write": true, 00:15:27.375 "unmap": true, 00:15:27.375 "write_zeroes": true, 00:15:27.375 "flush": true, 00:15:27.375 "reset": true, 00:15:27.375 "compare": false, 00:15:27.375 "compare_and_write": false, 00:15:27.375 "abort": true, 00:15:27.375 "nvme_admin": false, 00:15:27.375 "nvme_io": false 00:15:27.375 }, 00:15:27.375 "memory_domains": [ 00:15:27.375 { 00:15:27.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.375 "dma_device_type": 2 00:15:27.375 } 00:15:27.375 ], 00:15:27.375 "driver_specific": {} 00:15:27.375 } 00:15:27.375 ] 00:15:27.634 20:42:10 -- common/autotest_common.sh@895 -- # return 0 00:15:27.634 20:42:10 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:27.634 [2024-04-15 20:42:11.010900] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:27.634 [2024-04-15 20:42:11.012061] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:27.634 [2024-04-15 20:42:11.012124] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:27.634 [2024-04-15 20:42:11.012133] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:27.634 [2024-04-15 20:42:11.012155] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:27.634 [2024-04-15 20:42:11.012163] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:27.634 [2024-04-15 20:42:11.012178] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:27.634 20:42:11 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:27.634 20:42:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:27.634 20:42:11 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:27.634 20:42:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:27.634 20:42:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:27.634 20:42:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:27.634 20:42:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:27.634 20:42:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:27.634 20:42:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:27.634 20:42:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:27.634 20:42:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:27.634 20:42:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:27.634 20:42:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.634 20:42:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.893 20:42:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:27.893 "name": "Existed_Raid", 
00:15:27.893 "uuid": "8d277769-82bc-4365-8bc5-ba91457331be", 00:15:27.893 "strip_size_kb": 64, 00:15:27.893 "state": "configuring", 00:15:27.893 "raid_level": "concat", 00:15:27.893 "superblock": true, 00:15:27.893 "num_base_bdevs": 4, 00:15:27.893 "num_base_bdevs_discovered": 1, 00:15:27.893 "num_base_bdevs_operational": 4, 00:15:27.893 "base_bdevs_list": [ 00:15:27.893 { 00:15:27.893 "name": "BaseBdev1", 00:15:27.893 "uuid": "c87b8c2d-1b13-49ba-a55c-1b376e3d067a", 00:15:27.893 "is_configured": true, 00:15:27.893 "data_offset": 2048, 00:15:27.893 "data_size": 63488 00:15:27.893 }, 00:15:27.893 { 00:15:27.893 "name": "BaseBdev2", 00:15:27.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.893 "is_configured": false, 00:15:27.893 "data_offset": 0, 00:15:27.893 "data_size": 0 00:15:27.893 }, 00:15:27.893 { 00:15:27.893 "name": "BaseBdev3", 00:15:27.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.893 "is_configured": false, 00:15:27.893 "data_offset": 0, 00:15:27.893 "data_size": 0 00:15:27.893 }, 00:15:27.893 { 00:15:27.893 "name": "BaseBdev4", 00:15:27.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.893 "is_configured": false, 00:15:27.893 "data_offset": 0, 00:15:27.893 "data_size": 0 00:15:27.893 } 00:15:27.893 ] 00:15:27.893 }' 00:15:27.893 20:42:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:27.893 20:42:11 -- common/autotest_common.sh@10 -- # set +x 00:15:28.461 20:42:11 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:28.461 [2024-04-15 20:42:11.836506] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:28.461 BaseBdev2 00:15:28.461 20:42:11 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:28.461 20:42:11 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:28.461 20:42:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:28.461 20:42:11 -- common/autotest_common.sh@889 -- # local i 00:15:28.461 20:42:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:28.461 20:42:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:28.461 20:42:11 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:28.721 20:42:12 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:28.721 [ 00:15:28.721 { 00:15:28.721 "name": "BaseBdev2", 00:15:28.721 "aliases": [ 00:15:28.721 "c5da8eb8-6bf4-4a5f-8fb9-21b82168bb90" 00:15:28.721 ], 00:15:28.721 "product_name": "Malloc disk", 00:15:28.721 "block_size": 512, 00:15:28.721 "num_blocks": 65536, 00:15:28.721 "uuid": "c5da8eb8-6bf4-4a5f-8fb9-21b82168bb90", 00:15:28.721 "assigned_rate_limits": { 00:15:28.721 "rw_ios_per_sec": 0, 00:15:28.721 "rw_mbytes_per_sec": 0, 00:15:28.721 "r_mbytes_per_sec": 0, 00:15:28.721 "w_mbytes_per_sec": 0 00:15:28.721 }, 00:15:28.721 "claimed": true, 00:15:28.721 "claim_type": "exclusive_write", 00:15:28.721 "zoned": false, 00:15:28.721 "supported_io_types": { 00:15:28.721 "read": true, 00:15:28.721 "write": true, 00:15:28.721 "unmap": true, 00:15:28.721 "write_zeroes": true, 00:15:28.721 "flush": true, 00:15:28.721 "reset": true, 00:15:28.721 "compare": false, 00:15:28.721 "compare_and_write": false, 00:15:28.721 "abort": true, 00:15:28.721 "nvme_admin": false, 00:15:28.721 "nvme_io": false 00:15:28.721 }, 00:15:28.721 
"memory_domains": [ 00:15:28.721 { 00:15:28.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.721 "dma_device_type": 2 00:15:28.721 } 00:15:28.721 ], 00:15:28.721 "driver_specific": {} 00:15:28.721 } 00:15:28.721 ] 00:15:28.721 20:42:12 -- common/autotest_common.sh@895 -- # return 0 00:15:28.721 20:42:12 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:28.721 20:42:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:28.721 20:42:12 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:28.721 20:42:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:28.721 20:42:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:28.721 20:42:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:28.721 20:42:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:28.721 20:42:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:28.721 20:42:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:28.721 20:42:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:28.721 20:42:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:28.721 20:42:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:28.721 20:42:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.722 20:42:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.980 20:42:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:28.980 "name": "Existed_Raid", 00:15:28.980 "uuid": "8d277769-82bc-4365-8bc5-ba91457331be", 00:15:28.980 "strip_size_kb": 64, 00:15:28.980 "state": "configuring", 00:15:28.980 "raid_level": "concat", 00:15:28.980 "superblock": true, 00:15:28.980 "num_base_bdevs": 4, 00:15:28.980 "num_base_bdevs_discovered": 2, 00:15:28.980 "num_base_bdevs_operational": 4, 00:15:28.980 "base_bdevs_list": [ 00:15:28.980 { 00:15:28.980 "name": "BaseBdev1", 00:15:28.980 "uuid": "c87b8c2d-1b13-49ba-a55c-1b376e3d067a", 00:15:28.980 "is_configured": true, 00:15:28.980 "data_offset": 2048, 00:15:28.980 "data_size": 63488 00:15:28.980 }, 00:15:28.980 { 00:15:28.980 "name": "BaseBdev2", 00:15:28.980 "uuid": "c5da8eb8-6bf4-4a5f-8fb9-21b82168bb90", 00:15:28.980 "is_configured": true, 00:15:28.980 "data_offset": 2048, 00:15:28.980 "data_size": 63488 00:15:28.980 }, 00:15:28.980 { 00:15:28.980 "name": "BaseBdev3", 00:15:28.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.980 "is_configured": false, 00:15:28.980 "data_offset": 0, 00:15:28.980 "data_size": 0 00:15:28.980 }, 00:15:28.980 { 00:15:28.980 "name": "BaseBdev4", 00:15:28.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.980 "is_configured": false, 00:15:28.980 "data_offset": 0, 00:15:28.980 "data_size": 0 00:15:28.980 } 00:15:28.980 ] 00:15:28.980 }' 00:15:28.980 20:42:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:28.980 20:42:12 -- common/autotest_common.sh@10 -- # set +x 00:15:29.548 20:42:12 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:29.808 [2024-04-15 20:42:13.075430] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:29.808 BaseBdev3 00:15:29.808 20:42:13 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:15:29.808 20:42:13 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:15:29.808 20:42:13 -- common/autotest_common.sh@888 -- # local 
bdev_timeout= 00:15:29.808 20:42:13 -- common/autotest_common.sh@889 -- # local i 00:15:29.808 20:42:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:29.808 20:42:13 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:29.808 20:42:13 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:29.808 20:42:13 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:30.067 [ 00:15:30.067 { 00:15:30.067 "name": "BaseBdev3", 00:15:30.067 "aliases": [ 00:15:30.067 "fd8fb2a7-af95-4576-a2f1-affc41409529" 00:15:30.067 ], 00:15:30.067 "product_name": "Malloc disk", 00:15:30.067 "block_size": 512, 00:15:30.067 "num_blocks": 65536, 00:15:30.067 "uuid": "fd8fb2a7-af95-4576-a2f1-affc41409529", 00:15:30.067 "assigned_rate_limits": { 00:15:30.067 "rw_ios_per_sec": 0, 00:15:30.067 "rw_mbytes_per_sec": 0, 00:15:30.067 "r_mbytes_per_sec": 0, 00:15:30.067 "w_mbytes_per_sec": 0 00:15:30.067 }, 00:15:30.067 "claimed": true, 00:15:30.067 "claim_type": "exclusive_write", 00:15:30.067 "zoned": false, 00:15:30.067 "supported_io_types": { 00:15:30.067 "read": true, 00:15:30.067 "write": true, 00:15:30.067 "unmap": true, 00:15:30.067 "write_zeroes": true, 00:15:30.067 "flush": true, 00:15:30.067 "reset": true, 00:15:30.067 "compare": false, 00:15:30.067 "compare_and_write": false, 00:15:30.067 "abort": true, 00:15:30.067 "nvme_admin": false, 00:15:30.067 "nvme_io": false 00:15:30.067 }, 00:15:30.067 "memory_domains": [ 00:15:30.067 { 00:15:30.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.067 "dma_device_type": 2 00:15:30.067 } 00:15:30.067 ], 00:15:30.067 "driver_specific": {} 00:15:30.067 } 00:15:30.067 ] 00:15:30.067 20:42:13 -- common/autotest_common.sh@895 -- # return 0 00:15:30.067 20:42:13 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:30.067 20:42:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:30.067 20:42:13 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:15:30.067 20:42:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:30.067 20:42:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:30.067 20:42:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:30.067 20:42:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:30.067 20:42:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:30.067 20:42:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:30.067 20:42:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:30.067 20:42:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:30.067 20:42:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:30.067 20:42:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.067 20:42:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.067 20:42:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:30.067 "name": "Existed_Raid", 00:15:30.067 "uuid": "8d277769-82bc-4365-8bc5-ba91457331be", 00:15:30.067 "strip_size_kb": 64, 00:15:30.067 "state": "configuring", 00:15:30.067 "raid_level": "concat", 00:15:30.067 "superblock": true, 00:15:30.067 "num_base_bdevs": 4, 00:15:30.067 "num_base_bdevs_discovered": 3, 00:15:30.067 "num_base_bdevs_operational": 4, 00:15:30.067 "base_bdevs_list": [ 00:15:30.067 { 
00:15:30.067 "name": "BaseBdev1", 00:15:30.067 "uuid": "c87b8c2d-1b13-49ba-a55c-1b376e3d067a", 00:15:30.068 "is_configured": true, 00:15:30.068 "data_offset": 2048, 00:15:30.068 "data_size": 63488 00:15:30.068 }, 00:15:30.068 { 00:15:30.068 "name": "BaseBdev2", 00:15:30.068 "uuid": "c5da8eb8-6bf4-4a5f-8fb9-21b82168bb90", 00:15:30.068 "is_configured": true, 00:15:30.068 "data_offset": 2048, 00:15:30.068 "data_size": 63488 00:15:30.068 }, 00:15:30.068 { 00:15:30.068 "name": "BaseBdev3", 00:15:30.068 "uuid": "fd8fb2a7-af95-4576-a2f1-affc41409529", 00:15:30.068 "is_configured": true, 00:15:30.068 "data_offset": 2048, 00:15:30.068 "data_size": 63488 00:15:30.068 }, 00:15:30.068 { 00:15:30.068 "name": "BaseBdev4", 00:15:30.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.068 "is_configured": false, 00:15:30.068 "data_offset": 0, 00:15:30.068 "data_size": 0 00:15:30.068 } 00:15:30.068 ] 00:15:30.068 }' 00:15:30.068 20:42:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:30.068 20:42:13 -- common/autotest_common.sh@10 -- # set +x 00:15:30.635 20:42:14 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:30.895 [2024-04-15 20:42:14.278069] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:30.895 [2024-04-15 20:42:14.278194] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000029180 00:15:30.895 [2024-04-15 20:42:14.278205] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:30.895 [2024-04-15 20:42:14.278298] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:15:30.895 [2024-04-15 20:42:14.278479] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000029180 00:15:30.895 [2024-04-15 20:42:14.278489] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000029180 00:15:30.895 [2024-04-15 20:42:14.278582] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.895 BaseBdev4 00:15:30.895 20:42:14 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:15:30.895 20:42:14 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:15:30.895 20:42:14 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:30.895 20:42:14 -- common/autotest_common.sh@889 -- # local i 00:15:30.895 20:42:14 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:30.895 20:42:14 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:30.895 20:42:14 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:31.155 20:42:14 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:31.155 [ 00:15:31.155 { 00:15:31.155 "name": "BaseBdev4", 00:15:31.155 "aliases": [ 00:15:31.155 "5f9a2ba5-d9b9-4fa9-8e03-71b2808f800b" 00:15:31.155 ], 00:15:31.155 "product_name": "Malloc disk", 00:15:31.155 "block_size": 512, 00:15:31.155 "num_blocks": 65536, 00:15:31.155 "uuid": "5f9a2ba5-d9b9-4fa9-8e03-71b2808f800b", 00:15:31.155 "assigned_rate_limits": { 00:15:31.155 "rw_ios_per_sec": 0, 00:15:31.155 "rw_mbytes_per_sec": 0, 00:15:31.155 "r_mbytes_per_sec": 0, 00:15:31.155 "w_mbytes_per_sec": 0 00:15:31.155 }, 00:15:31.155 "claimed": true, 00:15:31.155 "claim_type": "exclusive_write", 00:15:31.155 "zoned": false, 
00:15:31.155 "supported_io_types": { 00:15:31.155 "read": true, 00:15:31.155 "write": true, 00:15:31.155 "unmap": true, 00:15:31.155 "write_zeroes": true, 00:15:31.155 "flush": true, 00:15:31.155 "reset": true, 00:15:31.155 "compare": false, 00:15:31.155 "compare_and_write": false, 00:15:31.155 "abort": true, 00:15:31.155 "nvme_admin": false, 00:15:31.155 "nvme_io": false 00:15:31.155 }, 00:15:31.155 "memory_domains": [ 00:15:31.155 { 00:15:31.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.155 "dma_device_type": 2 00:15:31.155 } 00:15:31.155 ], 00:15:31.155 "driver_specific": {} 00:15:31.155 } 00:15:31.155 ] 00:15:31.155 20:42:14 -- common/autotest_common.sh@895 -- # return 0 00:15:31.155 20:42:14 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:31.155 20:42:14 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:31.155 20:42:14 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:15:31.155 20:42:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:31.155 20:42:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:31.155 20:42:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:31.155 20:42:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:31.155 20:42:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:31.155 20:42:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:31.155 20:42:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:31.155 20:42:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:31.155 20:42:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:31.155 20:42:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.155 20:42:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.414 20:42:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:31.414 "name": "Existed_Raid", 00:15:31.414 "uuid": "8d277769-82bc-4365-8bc5-ba91457331be", 00:15:31.414 "strip_size_kb": 64, 00:15:31.414 "state": "online", 00:15:31.414 "raid_level": "concat", 00:15:31.414 "superblock": true, 00:15:31.414 "num_base_bdevs": 4, 00:15:31.414 "num_base_bdevs_discovered": 4, 00:15:31.414 "num_base_bdevs_operational": 4, 00:15:31.414 "base_bdevs_list": [ 00:15:31.414 { 00:15:31.414 "name": "BaseBdev1", 00:15:31.414 "uuid": "c87b8c2d-1b13-49ba-a55c-1b376e3d067a", 00:15:31.414 "is_configured": true, 00:15:31.414 "data_offset": 2048, 00:15:31.414 "data_size": 63488 00:15:31.414 }, 00:15:31.414 { 00:15:31.414 "name": "BaseBdev2", 00:15:31.414 "uuid": "c5da8eb8-6bf4-4a5f-8fb9-21b82168bb90", 00:15:31.414 "is_configured": true, 00:15:31.414 "data_offset": 2048, 00:15:31.414 "data_size": 63488 00:15:31.414 }, 00:15:31.414 { 00:15:31.414 "name": "BaseBdev3", 00:15:31.414 "uuid": "fd8fb2a7-af95-4576-a2f1-affc41409529", 00:15:31.414 "is_configured": true, 00:15:31.414 "data_offset": 2048, 00:15:31.414 "data_size": 63488 00:15:31.414 }, 00:15:31.414 { 00:15:31.414 "name": "BaseBdev4", 00:15:31.414 "uuid": "5f9a2ba5-d9b9-4fa9-8e03-71b2808f800b", 00:15:31.414 "is_configured": true, 00:15:31.414 "data_offset": 2048, 00:15:31.414 "data_size": 63488 00:15:31.414 } 00:15:31.414 ] 00:15:31.414 }' 00:15:31.414 20:42:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:31.414 20:42:14 -- common/autotest_common.sh@10 -- # set +x 00:15:31.982 20:42:15 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:15:31.982 [2024-04-15 20:42:15.396441] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:31.982 [2024-04-15 20:42:15.396469] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:31.982 [2024-04-15 20:42:15.396506] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:32.242 20:42:15 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:32.242 20:42:15 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:15:32.242 20:42:15 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:32.242 20:42:15 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:32.242 20:42:15 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:32.242 20:42:15 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:15:32.242 20:42:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:32.242 20:42:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:32.242 20:42:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:32.242 20:42:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:32.242 20:42:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:32.242 20:42:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:32.242 20:42:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:32.242 20:42:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:32.242 20:42:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:32.242 20:42:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:32.242 20:42:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.242 20:42:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:32.242 "name": "Existed_Raid", 00:15:32.242 "uuid": "8d277769-82bc-4365-8bc5-ba91457331be", 00:15:32.242 "strip_size_kb": 64, 00:15:32.242 "state": "offline", 00:15:32.242 "raid_level": "concat", 00:15:32.242 "superblock": true, 00:15:32.242 "num_base_bdevs": 4, 00:15:32.242 "num_base_bdevs_discovered": 3, 00:15:32.242 "num_base_bdevs_operational": 3, 00:15:32.242 "base_bdevs_list": [ 00:15:32.242 { 00:15:32.242 "name": null, 00:15:32.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.242 "is_configured": false, 00:15:32.242 "data_offset": 2048, 00:15:32.242 "data_size": 63488 00:15:32.242 }, 00:15:32.242 { 00:15:32.242 "name": "BaseBdev2", 00:15:32.242 "uuid": "c5da8eb8-6bf4-4a5f-8fb9-21b82168bb90", 00:15:32.242 "is_configured": true, 00:15:32.242 "data_offset": 2048, 00:15:32.242 "data_size": 63488 00:15:32.242 }, 00:15:32.242 { 00:15:32.242 "name": "BaseBdev3", 00:15:32.242 "uuid": "fd8fb2a7-af95-4576-a2f1-affc41409529", 00:15:32.242 "is_configured": true, 00:15:32.242 "data_offset": 2048, 00:15:32.242 "data_size": 63488 00:15:32.242 }, 00:15:32.242 { 00:15:32.242 "name": "BaseBdev4", 00:15:32.242 "uuid": "5f9a2ba5-d9b9-4fa9-8e03-71b2808f800b", 00:15:32.242 "is_configured": true, 00:15:32.242 "data_offset": 2048, 00:15:32.242 "data_size": 63488 00:15:32.242 } 00:15:32.242 ] 00:15:32.242 }' 00:15:32.242 20:42:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:32.242 20:42:15 -- common/autotest_common.sh@10 -- # set +x 00:15:32.810 20:42:16 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:32.810 20:42:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:32.810 20:42:16 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:32.810 20:42:16 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:33.069 20:42:16 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:33.069 20:42:16 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:33.069 20:42:16 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:33.069 [2024-04-15 20:42:16.533502] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:33.329 20:42:16 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:33.329 20:42:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:33.329 20:42:16 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.329 20:42:16 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:33.329 20:42:16 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:33.329 20:42:16 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:33.329 20:42:16 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:33.589 [2024-04-15 20:42:16.968295] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:33.589 20:42:17 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:33.589 20:42:17 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:33.589 20:42:17 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:33.589 20:42:17 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.848 20:42:17 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:33.848 20:42:17 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:33.848 20:42:17 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:15:34.107 [2024-04-15 20:42:17.447872] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:34.107 [2024-04-15 20:42:17.447917] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000029180 name Existed_Raid, state offline 00:15:34.107 20:42:17 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:34.107 20:42:17 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:34.107 20:42:17 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.107 20:42:17 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:34.365 20:42:17 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:34.365 20:42:17 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:34.365 20:42:17 -- bdev/bdev_raid.sh@287 -- # killprocess 54074 00:15:34.365 20:42:17 -- common/autotest_common.sh@926 -- # '[' -z 54074 ']' 00:15:34.365 20:42:17 -- common/autotest_common.sh@930 -- # kill -0 54074 00:15:34.365 20:42:17 -- common/autotest_common.sh@931 -- # uname 00:15:34.365 20:42:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:34.365 20:42:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 54074 00:15:34.365 killing process with pid 54074 00:15:34.365 20:42:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:34.365 20:42:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:34.365 20:42:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54074' 00:15:34.365 
20:42:17 -- common/autotest_common.sh@945 -- # kill 54074 00:15:34.365 [2024-04-15 20:42:17.744711] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:34.365 20:42:17 -- common/autotest_common.sh@950 -- # wait 54074 00:15:34.366 [2024-04-15 20:42:17.744801] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:35.745 ************************************ 00:15:35.745 END TEST raid_state_function_test_sb 00:15:35.745 ************************************ 00:15:35.745 20:42:19 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:35.745 00:15:35.745 real 0m12.955s 00:15:35.745 user 0m21.943s 00:15:35.745 sys 0m1.647s 00:15:35.745 20:42:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:35.745 20:42:19 -- common/autotest_common.sh@10 -- # set +x 00:15:35.745 20:42:19 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:15:35.745 20:42:19 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:15:35.745 20:42:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:35.745 20:42:19 -- common/autotest_common.sh@10 -- # set +x 00:15:35.745 ************************************ 00:15:35.745 START TEST raid_superblock_test 00:15:35.745 ************************************ 00:15:35.745 20:42:19 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 4 00:15:35.745 20:42:19 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:15:35.745 20:42:19 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:15:35.745 20:42:19 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:35.745 20:42:19 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:35.745 20:42:19 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:35.745 20:42:19 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:35.745 20:42:19 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:35.745 20:42:19 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:35.745 20:42:19 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:35.745 20:42:19 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:35.745 20:42:19 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:35.745 20:42:19 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:35.745 20:42:19 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:35.745 20:42:19 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:15:35.745 20:42:19 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:15:35.745 20:42:19 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:15:35.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:35.745 20:42:19 -- bdev/bdev_raid.sh@357 -- # raid_pid=54508 00:15:35.745 20:42:19 -- bdev/bdev_raid.sh@358 -- # waitforlisten 54508 /var/tmp/spdk-raid.sock 00:15:35.745 20:42:19 -- common/autotest_common.sh@819 -- # '[' -z 54508 ']' 00:15:35.745 20:42:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:35.745 20:42:19 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:35.745 20:42:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:35.745 20:42:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
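The trace above launches bdev_svc on /var/tmp/spdk-raid.sock and drives the whole superblock test over JSON-RPC. The commands below are a minimal standalone sketch of that setup flow, not the harness itself: the tree path, bdev sizes, UUIDs, strip size, and bdev names are copied from the trace, while the rpc_get_methods polling loop is an assumed stand-in for the harness's waitforlisten helper.

#!/usr/bin/env bash
SPDK=/home/vagrant/spdk_repo/spdk          # built SPDK tree, as in the trace
SOCK=/var/tmp/spdk-raid.sock

# Start the minimal bdev application with raid debug logging (bdev_raid.sh@356).
"$SPDK"/test/app/bdev_svc/bdev_svc -r "$SOCK" -L bdev_raid &
svc_pid=$!

# Poll until the RPC socket answers (stand-in for waitforlisten).
until "$SPDK"/scripts/rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
  sleep 0.1
done

# Four 32 MiB / 512 B-block malloc bdevs, each wrapped in a passthru bdev,
# mirroring bdev_raid.sh@370/@371.
for i in 1 2 3 4; do
  "$SPDK"/scripts/rpc.py -s "$SOCK" bdev_malloc_create 32 512 -b "malloc$i"
  "$SPDK"/scripts/rpc.py -s "$SOCK" bdev_passthru_create -b "malloc$i" \
    -p "pt$i" -u "00000000-0000-0000-0000-00000000000$i"
done

# concat RAID, 64 KiB strips, with an on-disk superblock (-s), as in bdev_raid.sh@375.
"$SPDK"/scripts/rpc.py -s "$SOCK" bdev_raid_create -z 64 -s -r concat \
  -b 'pt1 pt2 pt3 pt4' -n raid_bdev1

"$SPDK"/scripts/rpc.py -s "$SOCK" bdev_raid_get_bdevs all
kill "$svc_pid"; wait "$svc_pid" 2>/dev/null

Because -s writes a superblock through each pt bdev onto its backing malloc bdev, a later bdev_raid_create against the raw malloc names is expected to fail with "File exists" (-17), which is exactly the negative case exercised further down in this trace.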
00:15:35.745 20:42:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:35.745 20:42:19 -- common/autotest_common.sh@10 -- # set +x 00:15:35.745 [2024-04-15 20:42:19.219276] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:15:35.745 [2024-04-15 20:42:19.219424] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid54508 ] 00:15:36.004 [2024-04-15 20:42:19.406054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.262 [2024-04-15 20:42:19.599417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.520 [2024-04-15 20:42:19.796338] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:37.088 20:42:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:37.088 20:42:20 -- common/autotest_common.sh@852 -- # return 0 00:15:37.088 20:42:20 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:37.088 20:42:20 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:37.088 20:42:20 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:37.088 20:42:20 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:37.088 20:42:20 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:37.088 20:42:20 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:37.088 20:42:20 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:37.088 20:42:20 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:37.088 20:42:20 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:37.346 malloc1 00:15:37.346 20:42:20 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:37.604 [2024-04-15 20:42:20.895020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:37.604 [2024-04-15 20:42:20.895105] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.604 [2024-04-15 20:42:20.895151] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027080 00:15:37.604 [2024-04-15 20:42:20.895187] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.604 [2024-04-15 20:42:20.896726] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.604 [2024-04-15 20:42:20.896766] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:37.604 pt1 00:15:37.604 20:42:20 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:37.604 20:42:20 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:37.604 20:42:20 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:37.604 20:42:20 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:37.604 20:42:20 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:37.604 20:42:20 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:37.604 20:42:20 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:37.604 20:42:20 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:37.604 20:42:20 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:37.604 malloc2 00:15:37.604 20:42:21 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:37.862 [2024-04-15 20:42:21.259554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:37.862 [2024-04-15 20:42:21.260076] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.862 [2024-04-15 20:42:21.260195] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000028e80 00:15:37.862 [2024-04-15 20:42:21.260293] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.862 pt2 00:15:37.862 [2024-04-15 20:42:21.264190] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.862 [2024-04-15 20:42:21.264301] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:37.862 20:42:21 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:37.862 20:42:21 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:37.863 20:42:21 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:15:37.863 20:42:21 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:15:37.863 20:42:21 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:37.863 20:42:21 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:37.863 20:42:21 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:37.863 20:42:21 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:37.863 20:42:21 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:15:38.121 malloc3 00:15:38.121 20:42:21 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:38.121 [2024-04-15 20:42:21.617735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:38.121 [2024-04-15 20:42:21.617820] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.121 [2024-04-15 20:42:21.617866] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002ac80 00:15:38.121 [2024-04-15 20:42:21.617901] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.121 [2024-04-15 20:42:21.619456] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.121 [2024-04-15 20:42:21.619496] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:38.121 pt3 00:15:38.379 20:42:21 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:38.379 20:42:21 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:38.379 20:42:21 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:15:38.379 20:42:21 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:15:38.379 20:42:21 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:38.379 20:42:21 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:38.379 20:42:21 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:38.380 20:42:21 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:38.380 20:42:21 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:15:38.380 malloc4 00:15:38.380 20:42:21 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:38.638 [2024-04-15 20:42:21.969346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:38.638 [2024-04-15 20:42:21.969424] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.638 [2024-04-15 20:42:21.969462] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002ca80 00:15:38.638 [2024-04-15 20:42:21.969515] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.638 [2024-04-15 20:42:21.971095] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.638 [2024-04-15 20:42:21.971145] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:38.638 pt4 00:15:38.638 20:42:21 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:38.638 20:42:21 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:38.638 20:42:21 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:15:38.638 [2024-04-15 20:42:22.133197] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:38.638 [2024-04-15 20:42:22.134566] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:38.638 [2024-04-15 20:42:22.134609] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:38.638 [2024-04-15 20:42:22.134662] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:38.638 [2024-04-15 20:42:22.134774] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002df80 00:15:38.638 [2024-04-15 20:42:22.134785] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:38.638 [2024-04-15 20:42:22.134909] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:15:38.638 [2024-04-15 20:42:22.135177] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002df80 00:15:38.638 [2024-04-15 20:42:22.135197] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002df80 00:15:38.638 [2024-04-15 20:42:22.135329] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:38.896 20:42:22 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:38.896 20:42:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:38.896 20:42:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:38.896 20:42:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:38.896 20:42:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:38.896 20:42:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:38.896 20:42:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:38.896 20:42:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:38.896 20:42:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:38.896 20:42:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:38.896 20:42:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.896 20:42:22 -- bdev/bdev_raid.sh@127 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:38.896 20:42:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:38.896 "name": "raid_bdev1", 00:15:38.896 "uuid": "652ca579-d25f-47d2-9bb8-a66747c9c805", 00:15:38.896 "strip_size_kb": 64, 00:15:38.896 "state": "online", 00:15:38.896 "raid_level": "concat", 00:15:38.896 "superblock": true, 00:15:38.896 "num_base_bdevs": 4, 00:15:38.896 "num_base_bdevs_discovered": 4, 00:15:38.896 "num_base_bdevs_operational": 4, 00:15:38.896 "base_bdevs_list": [ 00:15:38.896 { 00:15:38.896 "name": "pt1", 00:15:38.896 "uuid": "23aebb9e-c4d9-5cd5-8dd2-ceb6cf3339c6", 00:15:38.896 "is_configured": true, 00:15:38.896 "data_offset": 2048, 00:15:38.896 "data_size": 63488 00:15:38.896 }, 00:15:38.896 { 00:15:38.896 "name": "pt2", 00:15:38.896 "uuid": "7063a836-c7e9-58f9-98a7-c39bd4d8967b", 00:15:38.896 "is_configured": true, 00:15:38.896 "data_offset": 2048, 00:15:38.896 "data_size": 63488 00:15:38.896 }, 00:15:38.896 { 00:15:38.896 "name": "pt3", 00:15:38.897 "uuid": "a71a4e09-0415-52ba-a7cc-1a3870b77459", 00:15:38.897 "is_configured": true, 00:15:38.897 "data_offset": 2048, 00:15:38.897 "data_size": 63488 00:15:38.897 }, 00:15:38.897 { 00:15:38.897 "name": "pt4", 00:15:38.897 "uuid": "cbe3d522-b759-5123-81a9-849a562f203d", 00:15:38.897 "is_configured": true, 00:15:38.897 "data_offset": 2048, 00:15:38.897 "data_size": 63488 00:15:38.897 } 00:15:38.897 ] 00:15:38.897 }' 00:15:38.897 20:42:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:38.897 20:42:22 -- common/autotest_common.sh@10 -- # set +x 00:15:39.576 20:42:22 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:39.577 20:42:22 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:39.854 [2024-04-15 20:42:23.039836] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:39.854 20:42:23 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=652ca579-d25f-47d2-9bb8-a66747c9c805 00:15:39.854 20:42:23 -- bdev/bdev_raid.sh@380 -- # '[' -z 652ca579-d25f-47d2-9bb8-a66747c9c805 ']' 00:15:39.854 20:42:23 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:39.854 [2024-04-15 20:42:23.195451] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:39.854 [2024-04-15 20:42:23.195482] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:39.854 [2024-04-15 20:42:23.195541] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:39.854 [2024-04-15 20:42:23.195580] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:39.854 [2024-04-15 20:42:23.195588] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002df80 name raid_bdev1, state offline 00:15:39.854 20:42:23 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.854 20:42:23 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:40.196 20:42:23 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:40.196 20:42:23 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:40.196 20:42:23 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:40.196 20:42:23 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
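Each verify_raid_bdev_state call seen in this trace reduces to the same pattern: dump bdev_raid_get_bdevs all, pick out the target bdev with jq, and compare state, raid_level, strip_size_kb, and the base-bdev counters against expectations. A hedged sketch of that check, reusing $SPDK and $SOCK from the sketch above:

# Sketch of the check behind verify_raid_bdev_state (bdev_raid.sh@117-@129).
info=$("$SPDK"/scripts/rpc.py -s "$SOCK" bdev_raid_get_bdevs all \
       | jq -r '.[] | select(.name == "raid_bdev1")')

state=$(jq -r '.state'         <<<"$info")   # "online" once all base bdevs are claimed
level=$(jq -r '.raid_level'    <<<"$info")   # "concat" for this test
strip=$(jq -r '.strip_size_kb' <<<"$info")   # 64, from '-z 64'
disc=$(jq -r '.num_base_bdevs_discovered' <<<"$info")

[[ $state == online && $level == concat && $strip -eq 64 && $disc -eq 4 ]] \
  || echo "unexpected raid_bdev1 state: $info"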
00:15:40.196 20:42:23 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:40.196 20:42:23 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:40.478 20:42:23 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:40.478 20:42:23 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:40.478 20:42:23 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:40.478 20:42:23 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:15:40.737 20:42:24 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:40.737 20:42:24 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:40.737 20:42:24 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:40.738 20:42:24 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:40.738 20:42:24 -- common/autotest_common.sh@640 -- # local es=0 00:15:40.738 20:42:24 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:40.738 20:42:24 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:40.738 20:42:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:40.738 20:42:24 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:40.738 20:42:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:40.738 20:42:24 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:40.738 20:42:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:40.738 20:42:24 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:40.738 20:42:24 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:40.738 20:42:24 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:41.002 [2024-04-15 20:42:24.325690] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:41.002 [2024-04-15 20:42:24.326988] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:41.002 [2024-04-15 20:42:24.327021] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:41.002 [2024-04-15 20:42:24.327039] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:41.002 [2024-04-15 20:42:24.327065] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:41.002 [2024-04-15 20:42:24.327119] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:41.002 [2024-04-15 20:42:24.327143] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:15:41.002 
[2024-04-15 20:42:24.327181] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:15:41.002 [2024-04-15 20:42:24.327199] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:41.002 [2024-04-15 20:42:24.327208] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002e580 name raid_bdev1, state configuring 00:15:41.002 request: 00:15:41.002 { 00:15:41.002 "name": "raid_bdev1", 00:15:41.002 "raid_level": "concat", 00:15:41.002 "base_bdevs": [ 00:15:41.002 "malloc1", 00:15:41.002 "malloc2", 00:15:41.002 "malloc3", 00:15:41.002 "malloc4" 00:15:41.002 ], 00:15:41.002 "superblock": false, 00:15:41.002 "strip_size_kb": 64, 00:15:41.002 "method": "bdev_raid_create", 00:15:41.002 "req_id": 1 00:15:41.002 } 00:15:41.002 Got JSON-RPC error response 00:15:41.002 response: 00:15:41.002 { 00:15:41.002 "code": -17, 00:15:41.002 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:41.002 } 00:15:41.002 20:42:24 -- common/autotest_common.sh@643 -- # es=1 00:15:41.002 20:42:24 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:41.002 20:42:24 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:41.002 20:42:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:41.002 20:42:24 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.002 20:42:24 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:41.326 20:42:24 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:41.326 20:42:24 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:41.326 20:42:24 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:41.326 [2024-04-15 20:42:24.661168] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:41.326 [2024-04-15 20:42:24.661233] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.326 [2024-04-15 20:42:24.661290] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002fa80 00:15:41.326 [2024-04-15 20:42:24.661312] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.326 [2024-04-15 20:42:24.662750] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.326 [2024-04-15 20:42:24.662799] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:41.326 [2024-04-15 20:42:24.662878] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:41.326 [2024-04-15 20:42:24.662935] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:41.326 pt1 00:15:41.326 20:42:24 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:15:41.326 20:42:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:41.326 20:42:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:41.326 20:42:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:41.326 20:42:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:41.326 20:42:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:41.326 20:42:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:41.326 20:42:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:41.326 20:42:24 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:15:41.326 20:42:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:41.326 20:42:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.326 20:42:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.605 20:42:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:41.605 "name": "raid_bdev1", 00:15:41.605 "uuid": "652ca579-d25f-47d2-9bb8-a66747c9c805", 00:15:41.605 "strip_size_kb": 64, 00:15:41.605 "state": "configuring", 00:15:41.605 "raid_level": "concat", 00:15:41.605 "superblock": true, 00:15:41.605 "num_base_bdevs": 4, 00:15:41.605 "num_base_bdevs_discovered": 1, 00:15:41.605 "num_base_bdevs_operational": 4, 00:15:41.605 "base_bdevs_list": [ 00:15:41.605 { 00:15:41.605 "name": "pt1", 00:15:41.605 "uuid": "23aebb9e-c4d9-5cd5-8dd2-ceb6cf3339c6", 00:15:41.605 "is_configured": true, 00:15:41.605 "data_offset": 2048, 00:15:41.605 "data_size": 63488 00:15:41.605 }, 00:15:41.605 { 00:15:41.605 "name": null, 00:15:41.605 "uuid": "7063a836-c7e9-58f9-98a7-c39bd4d8967b", 00:15:41.605 "is_configured": false, 00:15:41.605 "data_offset": 2048, 00:15:41.605 "data_size": 63488 00:15:41.605 }, 00:15:41.605 { 00:15:41.605 "name": null, 00:15:41.605 "uuid": "a71a4e09-0415-52ba-a7cc-1a3870b77459", 00:15:41.605 "is_configured": false, 00:15:41.605 "data_offset": 2048, 00:15:41.605 "data_size": 63488 00:15:41.605 }, 00:15:41.605 { 00:15:41.605 "name": null, 00:15:41.605 "uuid": "cbe3d522-b759-5123-81a9-849a562f203d", 00:15:41.605 "is_configured": false, 00:15:41.605 "data_offset": 2048, 00:15:41.605 "data_size": 63488 00:15:41.605 } 00:15:41.605 ] 00:15:41.605 }' 00:15:41.605 20:42:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:41.605 20:42:24 -- common/autotest_common.sh@10 -- # set +x 00:15:41.921 20:42:25 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:15:41.921 20:42:25 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:42.252 [2024-04-15 20:42:25.499891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:42.252 [2024-04-15 20:42:25.499959] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.252 [2024-04-15 20:42:25.500013] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000031880 00:15:42.252 [2024-04-15 20:42:25.500034] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.252 [2024-04-15 20:42:25.500308] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.252 [2024-04-15 20:42:25.500340] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:42.252 [2024-04-15 20:42:25.500416] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:42.252 [2024-04-15 20:42:25.500435] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:42.252 pt2 00:15:42.252 20:42:25 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:42.252 [2024-04-15 20:42:25.643704] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:42.252 20:42:25 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:15:42.252 20:42:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
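verify_raid_bdev_state, whose local variables are being set up in the trace here, pulls the bdev's JSON with bdev_raid_get_bdevs and filters it through the jq expression seen above. A rough sketch of just the state comparison; the real helper also checks raid_level, strip_size_kb and the base-bdev counts:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Grab raid_bdev1's entry and assert it reports state "configuring".
    tmp=$("$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
          jq -r '.[] | select(.name == "raid_bdev1")')
    [ "$(echo "$tmp" | jq -r '.state')" = "configuring" ] || exit 1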
00:15:42.252 20:42:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:42.252 20:42:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:42.252 20:42:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:42.252 20:42:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:42.252 20:42:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:42.252 20:42:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:42.252 20:42:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:42.252 20:42:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:42.252 20:42:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.252 20:42:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.511 20:42:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:42.511 "name": "raid_bdev1", 00:15:42.511 "uuid": "652ca579-d25f-47d2-9bb8-a66747c9c805", 00:15:42.511 "strip_size_kb": 64, 00:15:42.511 "state": "configuring", 00:15:42.511 "raid_level": "concat", 00:15:42.511 "superblock": true, 00:15:42.511 "num_base_bdevs": 4, 00:15:42.511 "num_base_bdevs_discovered": 1, 00:15:42.511 "num_base_bdevs_operational": 4, 00:15:42.511 "base_bdevs_list": [ 00:15:42.511 { 00:15:42.511 "name": "pt1", 00:15:42.511 "uuid": "23aebb9e-c4d9-5cd5-8dd2-ceb6cf3339c6", 00:15:42.511 "is_configured": true, 00:15:42.511 "data_offset": 2048, 00:15:42.511 "data_size": 63488 00:15:42.511 }, 00:15:42.511 { 00:15:42.511 "name": null, 00:15:42.511 "uuid": "7063a836-c7e9-58f9-98a7-c39bd4d8967b", 00:15:42.511 "is_configured": false, 00:15:42.511 "data_offset": 2048, 00:15:42.511 "data_size": 63488 00:15:42.511 }, 00:15:42.511 { 00:15:42.511 "name": null, 00:15:42.511 "uuid": "a71a4e09-0415-52ba-a7cc-1a3870b77459", 00:15:42.511 "is_configured": false, 00:15:42.511 "data_offset": 2048, 00:15:42.511 "data_size": 63488 00:15:42.511 }, 00:15:42.511 { 00:15:42.511 "name": null, 00:15:42.511 "uuid": "cbe3d522-b759-5123-81a9-849a562f203d", 00:15:42.511 "is_configured": false, 00:15:42.511 "data_offset": 2048, 00:15:42.511 "data_size": 63488 00:15:42.511 } 00:15:42.511 ] 00:15:42.511 }' 00:15:42.511 20:42:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:42.511 20:42:25 -- common/autotest_common.sh@10 -- # set +x 00:15:43.080 20:42:26 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:43.080 20:42:26 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:43.080 20:42:26 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:43.080 [2024-04-15 20:42:26.446472] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:43.080 [2024-04-15 20:42:26.446547] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.080 [2024-04-15 20:42:26.446593] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000032d80 00:15:43.080 [2024-04-15 20:42:26.446610] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.080 [2024-04-15 20:42:26.447077] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.080 [2024-04-15 20:42:26.447124] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:43.080 [2024-04-15 20:42:26.447196] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:15:43.080 [2024-04-15 20:42:26.447215] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:43.080 pt2 00:15:43.080 20:42:26 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:43.080 20:42:26 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:43.080 20:42:26 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:43.340 [2024-04-15 20:42:26.594212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:43.340 [2024-04-15 20:42:26.594268] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.340 [2024-04-15 20:42:26.594301] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000034280 00:15:43.340 [2024-04-15 20:42:26.594323] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.340 [2024-04-15 20:42:26.594573] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.340 [2024-04-15 20:42:26.594610] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:43.340 [2024-04-15 20:42:26.594844] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:15:43.340 [2024-04-15 20:42:26.594877] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:43.340 pt3 00:15:43.340 20:42:26 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:43.340 20:42:26 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:43.340 20:42:26 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:43.340 [2024-04-15 20:42:26.745981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:43.340 [2024-04-15 20:42:26.746046] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.340 [2024-04-15 20:42:26.746079] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000035780 00:15:43.340 [2024-04-15 20:42:26.746101] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.340 [2024-04-15 20:42:26.746362] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.340 [2024-04-15 20:42:26.746396] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:43.340 [2024-04-15 20:42:26.746460] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:15:43.340 [2024-04-15 20:42:26.746475] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:43.340 [2024-04-15 20:42:26.746533] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000031280 00:15:43.340 [2024-04-15 20:42:26.746541] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:43.340 [2024-04-15 20:42:26.746600] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:43.340 [2024-04-15 20:42:26.746967] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000031280 00:15:43.340 [2024-04-15 20:42:26.746984] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000031280 00:15:43.340 [2024-04-15 20:42:26.747076] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:15:43.340 pt4 00:15:43.340 20:42:26 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:43.340 20:42:26 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:43.340 20:42:26 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:43.340 20:42:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:43.340 20:42:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:43.340 20:42:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:43.340 20:42:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:43.340 20:42:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:43.340 20:42:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:43.340 20:42:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:43.340 20:42:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:43.340 20:42:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:43.340 20:42:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.340 20:42:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.600 20:42:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:43.600 "name": "raid_bdev1", 00:15:43.600 "uuid": "652ca579-d25f-47d2-9bb8-a66747c9c805", 00:15:43.600 "strip_size_kb": 64, 00:15:43.600 "state": "online", 00:15:43.600 "raid_level": "concat", 00:15:43.600 "superblock": true, 00:15:43.600 "num_base_bdevs": 4, 00:15:43.600 "num_base_bdevs_discovered": 4, 00:15:43.600 "num_base_bdevs_operational": 4, 00:15:43.600 "base_bdevs_list": [ 00:15:43.600 { 00:15:43.600 "name": "pt1", 00:15:43.600 "uuid": "23aebb9e-c4d9-5cd5-8dd2-ceb6cf3339c6", 00:15:43.600 "is_configured": true, 00:15:43.600 "data_offset": 2048, 00:15:43.600 "data_size": 63488 00:15:43.600 }, 00:15:43.600 { 00:15:43.600 "name": "pt2", 00:15:43.600 "uuid": "7063a836-c7e9-58f9-98a7-c39bd4d8967b", 00:15:43.600 "is_configured": true, 00:15:43.600 "data_offset": 2048, 00:15:43.600 "data_size": 63488 00:15:43.600 }, 00:15:43.600 { 00:15:43.600 "name": "pt3", 00:15:43.600 "uuid": "a71a4e09-0415-52ba-a7cc-1a3870b77459", 00:15:43.600 "is_configured": true, 00:15:43.600 "data_offset": 2048, 00:15:43.600 "data_size": 63488 00:15:43.600 }, 00:15:43.600 { 00:15:43.600 "name": "pt4", 00:15:43.600 "uuid": "cbe3d522-b759-5123-81a9-849a562f203d", 00:15:43.600 "is_configured": true, 00:15:43.600 "data_offset": 2048, 00:15:43.600 "data_size": 63488 00:15:43.600 } 00:15:43.600 ] 00:15:43.600 }' 00:15:43.600 20:42:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:43.600 20:42:26 -- common/autotest_common.sh@10 -- # set +x 00:15:44.170 20:42:27 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:44.170 20:42:27 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:44.170 [2024-04-15 20:42:27.568893] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:44.170 20:42:27 -- bdev/bdev_raid.sh@430 -- # '[' 652ca579-d25f-47d2-9bb8-a66747c9c805 '!=' 652ca579-d25f-47d2-9bb8-a66747c9c805 ']' 00:15:44.170 20:42:27 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:15:44.170 20:42:27 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:44.170 20:42:27 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:44.170 20:42:27 -- bdev/bdev_raid.sh@511 -- # killprocess 54508 00:15:44.170 20:42:27 -- common/autotest_common.sh@926 -- # '[' 
-z 54508 ']' 00:15:44.170 20:42:27 -- common/autotest_common.sh@930 -- # kill -0 54508 00:15:44.170 20:42:27 -- common/autotest_common.sh@931 -- # uname 00:15:44.170 20:42:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:44.170 20:42:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 54508 00:15:44.170 20:42:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:44.170 20:42:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:44.170 killing process with pid 54508 00:15:44.170 20:42:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 54508' 00:15:44.170 20:42:27 -- common/autotest_common.sh@945 -- # kill 54508 00:15:44.170 [2024-04-15 20:42:27.609987] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:44.170 20:42:27 -- common/autotest_common.sh@950 -- # wait 54508 00:15:44.170 [2024-04-15 20:42:27.610043] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:44.170 [2024-04-15 20:42:27.610080] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:44.170 [2024-04-15 20:42:27.610088] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000031280 name raid_bdev1, state offline 00:15:44.739 [2024-04-15 20:42:27.956505] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:46.116 ************************************ 00:15:46.116 END TEST raid_superblock_test 00:15:46.116 ************************************ 00:15:46.116 20:42:29 -- bdev/bdev_raid.sh@513 -- # return 0 00:15:46.116 00:15:46.117 real 0m10.130s 00:15:46.117 user 0m16.615s 00:15:46.117 sys 0m1.197s 00:15:46.117 20:42:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:46.117 20:42:29 -- common/autotest_common.sh@10 -- # set +x 00:15:46.117 20:42:29 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:46.117 20:42:29 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:15:46.117 20:42:29 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:46.117 20:42:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:46.117 20:42:29 -- common/autotest_common.sh@10 -- # set +x 00:15:46.117 ************************************ 00:15:46.117 START TEST raid_state_function_test 00:15:46.117 ************************************ 00:15:46.117 20:42:29 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 false 00:15:46.117 20:42:29 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:15:46.117 20:42:29 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:15:46.117 20:42:29 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:46.117 20:42:29 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:46.117 20:42:29 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:46.117 20:42:29 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:46.117 20:42:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:46.117 20:42:29 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:46.117 20:42:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:46.117 20:42:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:46.117 20:42:29 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:46.117 20:42:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:46.117 20:42:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:46.117 20:42:29 -- 
bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:15:46.117 20:42:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:46.117 20:42:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:46.117 20:42:29 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:15:46.117 20:42:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:46.117 20:42:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:46.117 20:42:29 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:46.117 20:42:29 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:46.117 20:42:29 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:46.117 20:42:29 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:46.117 20:42:29 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:46.117 20:42:29 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:15:46.117 20:42:29 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:15:46.117 20:42:29 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:46.117 20:42:29 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:46.117 Process raid pid: 54814 00:15:46.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:46.117 20:42:29 -- bdev/bdev_raid.sh@226 -- # raid_pid=54814 00:15:46.117 20:42:29 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 54814' 00:15:46.117 20:42:29 -- bdev/bdev_raid.sh@228 -- # waitforlisten 54814 /var/tmp/spdk-raid.sock 00:15:46.117 20:42:29 -- common/autotest_common.sh@819 -- # '[' -z 54814 ']' 00:15:46.117 20:42:29 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:46.117 20:42:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:46.117 20:42:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:46.117 20:42:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:46.117 20:42:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:46.117 20:42:29 -- common/autotest_common.sh@10 -- # set +x 00:15:46.117 [2024-04-15 20:42:29.426334] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
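The raid_state_function_test run starting here follows the harness pattern visible in the trace: bdev_svc is launched on a private RPC socket with bdev_raid debug logging, the test waits for it to listen, drives it over rpc.py, and kills it at the end. A condensed sketch, assuming the waitforlisten and killprocess helpers from common/autotest_common.sh:

    # Launch the minimal bdev app on the test's private socket.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock
    # ... bdev_raid_create / verify_raid_bdev_state RPC traffic ...
    killprocess "$raid_pid"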
00:15:46.117 [2024-04-15 20:42:29.426484] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:46.117 [2024-04-15 20:42:29.583680] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.375 [2024-04-15 20:42:29.778650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.634 [2024-04-15 20:42:29.978496] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:47.571 20:42:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:47.571 20:42:30 -- common/autotest_common.sh@852 -- # return 0 00:15:47.571 20:42:30 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:47.571 [2024-04-15 20:42:30.927247] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:47.571 [2024-04-15 20:42:30.927318] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:47.571 [2024-04-15 20:42:30.927330] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:47.571 [2024-04-15 20:42:30.927346] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:47.571 [2024-04-15 20:42:30.927353] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:47.571 [2024-04-15 20:42:30.927393] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:47.571 [2024-04-15 20:42:30.927400] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:47.571 [2024-04-15 20:42:30.927420] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:47.571 20:42:30 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:47.571 20:42:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:47.571 20:42:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:47.571 20:42:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:47.571 20:42:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:47.571 20:42:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:47.571 20:42:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:47.571 20:42:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:47.571 20:42:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:47.571 20:42:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:47.571 20:42:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.571 20:42:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.830 20:42:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:47.831 "name": "Existed_Raid", 00:15:47.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.831 "strip_size_kb": 0, 00:15:47.831 "state": "configuring", 00:15:47.831 "raid_level": "raid1", 00:15:47.831 "superblock": false, 00:15:47.831 "num_base_bdevs": 4, 00:15:47.831 "num_base_bdevs_discovered": 0, 00:15:47.831 "num_base_bdevs_operational": 4, 00:15:47.831 "base_bdevs_list": [ 00:15:47.831 { 00:15:47.831 "name": 
"BaseBdev1", 00:15:47.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.831 "is_configured": false, 00:15:47.831 "data_offset": 0, 00:15:47.831 "data_size": 0 00:15:47.831 }, 00:15:47.831 { 00:15:47.831 "name": "BaseBdev2", 00:15:47.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.831 "is_configured": false, 00:15:47.831 "data_offset": 0, 00:15:47.831 "data_size": 0 00:15:47.831 }, 00:15:47.831 { 00:15:47.831 "name": "BaseBdev3", 00:15:47.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.831 "is_configured": false, 00:15:47.831 "data_offset": 0, 00:15:47.831 "data_size": 0 00:15:47.831 }, 00:15:47.831 { 00:15:47.831 "name": "BaseBdev4", 00:15:47.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.831 "is_configured": false, 00:15:47.831 "data_offset": 0, 00:15:47.831 "data_size": 0 00:15:47.831 } 00:15:47.831 ] 00:15:47.831 }' 00:15:47.831 20:42:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:47.831 20:42:31 -- common/autotest_common.sh@10 -- # set +x 00:15:48.090 20:42:31 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:48.349 [2024-04-15 20:42:31.726062] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:48.349 [2024-04-15 20:42:31.726137] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:15:48.349 20:42:31 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:48.608 [2024-04-15 20:42:31.881834] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:48.608 [2024-04-15 20:42:31.881916] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:48.608 [2024-04-15 20:42:31.881928] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:48.608 [2024-04-15 20:42:31.881961] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:48.608 [2024-04-15 20:42:31.881968] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:48.608 [2024-04-15 20:42:31.881994] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:48.608 [2024-04-15 20:42:31.882001] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:48.608 [2024-04-15 20:42:31.882023] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:48.608 20:42:31 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:48.608 [2024-04-15 20:42:32.106463] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:48.608 BaseBdev1 00:15:48.867 20:42:32 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:48.867 20:42:32 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:48.867 20:42:32 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:48.867 20:42:32 -- common/autotest_common.sh@889 -- # local i 00:15:48.867 20:42:32 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:48.867 20:42:32 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:48.867 20:42:32 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:48.867 20:42:32 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:49.171 [ 00:15:49.171 { 00:15:49.171 "name": "BaseBdev1", 00:15:49.171 "aliases": [ 00:15:49.171 "6ec8d7e6-8fd1-46aa-b8fc-ca39853f1580" 00:15:49.171 ], 00:15:49.171 "product_name": "Malloc disk", 00:15:49.171 "block_size": 512, 00:15:49.171 "num_blocks": 65536, 00:15:49.171 "uuid": "6ec8d7e6-8fd1-46aa-b8fc-ca39853f1580", 00:15:49.171 "assigned_rate_limits": { 00:15:49.171 "rw_ios_per_sec": 0, 00:15:49.171 "rw_mbytes_per_sec": 0, 00:15:49.171 "r_mbytes_per_sec": 0, 00:15:49.171 "w_mbytes_per_sec": 0 00:15:49.171 }, 00:15:49.171 "claimed": true, 00:15:49.171 "claim_type": "exclusive_write", 00:15:49.171 "zoned": false, 00:15:49.171 "supported_io_types": { 00:15:49.171 "read": true, 00:15:49.171 "write": true, 00:15:49.171 "unmap": true, 00:15:49.171 "write_zeroes": true, 00:15:49.171 "flush": true, 00:15:49.171 "reset": true, 00:15:49.171 "compare": false, 00:15:49.171 "compare_and_write": false, 00:15:49.171 "abort": true, 00:15:49.171 "nvme_admin": false, 00:15:49.171 "nvme_io": false 00:15:49.171 }, 00:15:49.171 "memory_domains": [ 00:15:49.171 { 00:15:49.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.171 "dma_device_type": 2 00:15:49.171 } 00:15:49.171 ], 00:15:49.171 "driver_specific": {} 00:15:49.171 } 00:15:49.171 ] 00:15:49.171 20:42:32 -- common/autotest_common.sh@895 -- # return 0 00:15:49.171 20:42:32 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:49.171 20:42:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:49.171 20:42:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:49.171 20:42:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:49.171 20:42:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:49.171 20:42:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:49.171 20:42:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:49.171 20:42:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:49.171 20:42:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:49.171 20:42:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:49.171 20:42:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.171 20:42:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:49.171 20:42:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:49.171 "name": "Existed_Raid", 00:15:49.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.171 "strip_size_kb": 0, 00:15:49.171 "state": "configuring", 00:15:49.171 "raid_level": "raid1", 00:15:49.171 "superblock": false, 00:15:49.171 "num_base_bdevs": 4, 00:15:49.171 "num_base_bdevs_discovered": 1, 00:15:49.171 "num_base_bdevs_operational": 4, 00:15:49.171 "base_bdevs_list": [ 00:15:49.171 { 00:15:49.171 "name": "BaseBdev1", 00:15:49.171 "uuid": "6ec8d7e6-8fd1-46aa-b8fc-ca39853f1580", 00:15:49.171 "is_configured": true, 00:15:49.171 "data_offset": 0, 00:15:49.171 "data_size": 65536 00:15:49.171 }, 00:15:49.171 { 00:15:49.171 "name": "BaseBdev2", 00:15:49.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.171 "is_configured": false, 00:15:49.171 "data_offset": 0, 00:15:49.171 "data_size": 0 00:15:49.171 }, 
00:15:49.171 { 00:15:49.171 "name": "BaseBdev3", 00:15:49.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.171 "is_configured": false, 00:15:49.171 "data_offset": 0, 00:15:49.171 "data_size": 0 00:15:49.171 }, 00:15:49.171 { 00:15:49.171 "name": "BaseBdev4", 00:15:49.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.171 "is_configured": false, 00:15:49.171 "data_offset": 0, 00:15:49.171 "data_size": 0 00:15:49.171 } 00:15:49.171 ] 00:15:49.171 }' 00:15:49.171 20:42:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:49.171 20:42:32 -- common/autotest_common.sh@10 -- # set +x 00:15:49.752 20:42:33 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:50.012 [2024-04-15 20:42:33.348722] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:50.012 [2024-04-15 20:42:33.348796] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027680 name Existed_Raid, state configuring 00:15:50.012 20:42:33 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:50.012 20:42:33 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:50.012 [2024-04-15 20:42:33.496516] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:50.012 [2024-04-15 20:42:33.498342] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:50.012 [2024-04-15 20:42:33.498428] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:50.012 [2024-04-15 20:42:33.498449] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:50.012 [2024-04-15 20:42:33.498475] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:50.012 [2024-04-15 20:42:33.498483] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:50.012 [2024-04-15 20:42:33.498502] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:50.012 20:42:33 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:50.012 20:42:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:50.012 20:42:33 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:50.012 20:42:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:50.012 20:42:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:50.012 20:42:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:50.012 20:42:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:50.012 20:42:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:50.012 20:42:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:50.012 20:42:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:50.012 20:42:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:50.012 20:42:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:50.272 20:42:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.272 20:42:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.272 20:42:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:50.272 "name": "Existed_Raid", 00:15:50.272 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:50.272 "strip_size_kb": 0, 00:15:50.272 "state": "configuring", 00:15:50.272 "raid_level": "raid1", 00:15:50.272 "superblock": false, 00:15:50.272 "num_base_bdevs": 4, 00:15:50.272 "num_base_bdevs_discovered": 1, 00:15:50.272 "num_base_bdevs_operational": 4, 00:15:50.272 "base_bdevs_list": [ 00:15:50.272 { 00:15:50.272 "name": "BaseBdev1", 00:15:50.272 "uuid": "6ec8d7e6-8fd1-46aa-b8fc-ca39853f1580", 00:15:50.272 "is_configured": true, 00:15:50.272 "data_offset": 0, 00:15:50.272 "data_size": 65536 00:15:50.272 }, 00:15:50.272 { 00:15:50.272 "name": "BaseBdev2", 00:15:50.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.272 "is_configured": false, 00:15:50.272 "data_offset": 0, 00:15:50.272 "data_size": 0 00:15:50.272 }, 00:15:50.272 { 00:15:50.272 "name": "BaseBdev3", 00:15:50.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.272 "is_configured": false, 00:15:50.272 "data_offset": 0, 00:15:50.272 "data_size": 0 00:15:50.272 }, 00:15:50.272 { 00:15:50.272 "name": "BaseBdev4", 00:15:50.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.272 "is_configured": false, 00:15:50.272 "data_offset": 0, 00:15:50.272 "data_size": 0 00:15:50.272 } 00:15:50.272 ] 00:15:50.272 }' 00:15:50.272 20:42:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:50.272 20:42:33 -- common/autotest_common.sh@10 -- # set +x 00:15:50.841 20:42:34 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:51.100 [2024-04-15 20:42:34.426494] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:51.100 BaseBdev2 00:15:51.100 20:42:34 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:51.100 20:42:34 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:51.100 20:42:34 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:51.100 20:42:34 -- common/autotest_common.sh@889 -- # local i 00:15:51.100 20:42:34 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:51.100 20:42:34 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:51.100 20:42:34 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:51.100 20:42:34 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:51.359 [ 00:15:51.359 { 00:15:51.359 "name": "BaseBdev2", 00:15:51.359 "aliases": [ 00:15:51.359 "3eaf5edd-1e32-47bb-9108-9f664171b248" 00:15:51.359 ], 00:15:51.359 "product_name": "Malloc disk", 00:15:51.359 "block_size": 512, 00:15:51.359 "num_blocks": 65536, 00:15:51.359 "uuid": "3eaf5edd-1e32-47bb-9108-9f664171b248", 00:15:51.359 "assigned_rate_limits": { 00:15:51.359 "rw_ios_per_sec": 0, 00:15:51.359 "rw_mbytes_per_sec": 0, 00:15:51.359 "r_mbytes_per_sec": 0, 00:15:51.359 "w_mbytes_per_sec": 0 00:15:51.359 }, 00:15:51.359 "claimed": true, 00:15:51.359 "claim_type": "exclusive_write", 00:15:51.359 "zoned": false, 00:15:51.359 "supported_io_types": { 00:15:51.359 "read": true, 00:15:51.359 "write": true, 00:15:51.359 "unmap": true, 00:15:51.359 "write_zeroes": true, 00:15:51.359 "flush": true, 00:15:51.359 "reset": true, 00:15:51.359 "compare": false, 00:15:51.359 "compare_and_write": false, 00:15:51.359 "abort": true, 00:15:51.359 "nvme_admin": false, 00:15:51.359 "nvme_io": false 00:15:51.359 }, 00:15:51.359 "memory_domains": [ 00:15:51.359 { 
00:15:51.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.359 "dma_device_type": 2 00:15:51.359 } 00:15:51.359 ], 00:15:51.359 "driver_specific": {} 00:15:51.359 } 00:15:51.359 ] 00:15:51.359 20:42:34 -- common/autotest_common.sh@895 -- # return 0 00:15:51.359 20:42:34 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:51.359 20:42:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:51.359 20:42:34 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:51.359 20:42:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:51.359 20:42:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:51.359 20:42:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:51.359 20:42:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:51.359 20:42:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:51.359 20:42:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:51.359 20:42:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:51.359 20:42:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:51.359 20:42:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:51.359 20:42:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.359 20:42:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.619 20:42:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:51.619 "name": "Existed_Raid", 00:15:51.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.619 "strip_size_kb": 0, 00:15:51.619 "state": "configuring", 00:15:51.619 "raid_level": "raid1", 00:15:51.619 "superblock": false, 00:15:51.619 "num_base_bdevs": 4, 00:15:51.619 "num_base_bdevs_discovered": 2, 00:15:51.619 "num_base_bdevs_operational": 4, 00:15:51.619 "base_bdevs_list": [ 00:15:51.619 { 00:15:51.619 "name": "BaseBdev1", 00:15:51.619 "uuid": "6ec8d7e6-8fd1-46aa-b8fc-ca39853f1580", 00:15:51.619 "is_configured": true, 00:15:51.619 "data_offset": 0, 00:15:51.619 "data_size": 65536 00:15:51.619 }, 00:15:51.619 { 00:15:51.619 "name": "BaseBdev2", 00:15:51.619 "uuid": "3eaf5edd-1e32-47bb-9108-9f664171b248", 00:15:51.619 "is_configured": true, 00:15:51.619 "data_offset": 0, 00:15:51.619 "data_size": 65536 00:15:51.619 }, 00:15:51.619 { 00:15:51.619 "name": "BaseBdev3", 00:15:51.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.619 "is_configured": false, 00:15:51.619 "data_offset": 0, 00:15:51.619 "data_size": 0 00:15:51.619 }, 00:15:51.619 { 00:15:51.619 "name": "BaseBdev4", 00:15:51.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.619 "is_configured": false, 00:15:51.619 "data_offset": 0, 00:15:51.619 "data_size": 0 00:15:51.619 } 00:15:51.619 ] 00:15:51.619 }' 00:15:51.619 20:42:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:51.619 20:42:34 -- common/autotest_common.sh@10 -- # set +x 00:15:52.187 20:42:35 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:52.446 BaseBdev3 00:15:52.446 [2024-04-15 20:42:35.694975] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:52.446 20:42:35 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:15:52.446 20:42:35 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:15:52.446 20:42:35 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:52.446 20:42:35 -- 
common/autotest_common.sh@889 -- # local i 00:15:52.446 20:42:35 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:52.446 20:42:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:52.446 20:42:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:52.446 20:42:35 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:52.705 [ 00:15:52.705 { 00:15:52.705 "name": "BaseBdev3", 00:15:52.705 "aliases": [ 00:15:52.705 "89e571ee-55e3-4e8a-854e-60a438bd1435" 00:15:52.705 ], 00:15:52.705 "product_name": "Malloc disk", 00:15:52.705 "block_size": 512, 00:15:52.705 "num_blocks": 65536, 00:15:52.705 "uuid": "89e571ee-55e3-4e8a-854e-60a438bd1435", 00:15:52.705 "assigned_rate_limits": { 00:15:52.705 "rw_ios_per_sec": 0, 00:15:52.705 "rw_mbytes_per_sec": 0, 00:15:52.705 "r_mbytes_per_sec": 0, 00:15:52.705 "w_mbytes_per_sec": 0 00:15:52.705 }, 00:15:52.705 "claimed": true, 00:15:52.705 "claim_type": "exclusive_write", 00:15:52.705 "zoned": false, 00:15:52.705 "supported_io_types": { 00:15:52.705 "read": true, 00:15:52.705 "write": true, 00:15:52.705 "unmap": true, 00:15:52.705 "write_zeroes": true, 00:15:52.705 "flush": true, 00:15:52.705 "reset": true, 00:15:52.705 "compare": false, 00:15:52.705 "compare_and_write": false, 00:15:52.705 "abort": true, 00:15:52.705 "nvme_admin": false, 00:15:52.705 "nvme_io": false 00:15:52.705 }, 00:15:52.705 "memory_domains": [ 00:15:52.705 { 00:15:52.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.705 "dma_device_type": 2 00:15:52.705 } 00:15:52.705 ], 00:15:52.705 "driver_specific": {} 00:15:52.705 } 00:15:52.705 ] 00:15:52.705 20:42:35 -- common/autotest_common.sh@895 -- # return 0 00:15:52.705 20:42:35 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:52.705 20:42:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:52.705 20:42:35 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:52.705 20:42:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:52.705 20:42:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:52.705 20:42:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:52.705 20:42:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:52.705 20:42:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:52.705 20:42:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:52.705 20:42:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:52.705 20:42:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:52.705 20:42:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:52.705 20:42:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.705 20:42:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.705 20:42:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:52.705 "name": "Existed_Raid", 00:15:52.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.705 "strip_size_kb": 0, 00:15:52.705 "state": "configuring", 00:15:52.705 "raid_level": "raid1", 00:15:52.705 "superblock": false, 00:15:52.705 "num_base_bdevs": 4, 00:15:52.705 "num_base_bdevs_discovered": 3, 00:15:52.705 "num_base_bdevs_operational": 4, 00:15:52.705 "base_bdevs_list": [ 00:15:52.705 { 00:15:52.705 "name": "BaseBdev1", 
00:15:52.705 "uuid": "6ec8d7e6-8fd1-46aa-b8fc-ca39853f1580", 00:15:52.705 "is_configured": true, 00:15:52.705 "data_offset": 0, 00:15:52.705 "data_size": 65536 00:15:52.705 }, 00:15:52.705 { 00:15:52.705 "name": "BaseBdev2", 00:15:52.705 "uuid": "3eaf5edd-1e32-47bb-9108-9f664171b248", 00:15:52.705 "is_configured": true, 00:15:52.705 "data_offset": 0, 00:15:52.705 "data_size": 65536 00:15:52.705 }, 00:15:52.705 { 00:15:52.706 "name": "BaseBdev3", 00:15:52.706 "uuid": "89e571ee-55e3-4e8a-854e-60a438bd1435", 00:15:52.706 "is_configured": true, 00:15:52.706 "data_offset": 0, 00:15:52.706 "data_size": 65536 00:15:52.706 }, 00:15:52.706 { 00:15:52.706 "name": "BaseBdev4", 00:15:52.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.706 "is_configured": false, 00:15:52.706 "data_offset": 0, 00:15:52.706 "data_size": 0 00:15:52.706 } 00:15:52.706 ] 00:15:52.706 }' 00:15:52.706 20:42:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:52.706 20:42:36 -- common/autotest_common.sh@10 -- # set +x 00:15:53.274 20:42:36 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:53.534 [2024-04-15 20:42:36.923847] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:53.534 [2024-04-15 20:42:36.923896] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000028b80 00:15:53.534 [2024-04-15 20:42:36.923906] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:53.534 [2024-04-15 20:42:36.924020] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:15:53.534 [2024-04-15 20:42:36.924216] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000028b80 00:15:53.534 [2024-04-15 20:42:36.924228] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000028b80 00:15:53.534 [2024-04-15 20:42:36.924405] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:53.534 BaseBdev4 00:15:53.534 20:42:36 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:15:53.534 20:42:36 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:15:53.534 20:42:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:53.534 20:42:36 -- common/autotest_common.sh@889 -- # local i 00:15:53.534 20:42:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:53.534 20:42:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:53.534 20:42:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:53.793 20:42:37 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:53.793 [ 00:15:53.793 { 00:15:53.793 "name": "BaseBdev4", 00:15:53.793 "aliases": [ 00:15:53.793 "c8a52678-4e7c-4ce6-a076-1476cd7fd07f" 00:15:53.793 ], 00:15:53.793 "product_name": "Malloc disk", 00:15:53.793 "block_size": 512, 00:15:53.793 "num_blocks": 65536, 00:15:53.793 "uuid": "c8a52678-4e7c-4ce6-a076-1476cd7fd07f", 00:15:53.793 "assigned_rate_limits": { 00:15:53.793 "rw_ios_per_sec": 0, 00:15:53.794 "rw_mbytes_per_sec": 0, 00:15:53.794 "r_mbytes_per_sec": 0, 00:15:53.794 "w_mbytes_per_sec": 0 00:15:53.794 }, 00:15:53.794 "claimed": true, 00:15:53.794 "claim_type": "exclusive_write", 00:15:53.794 "zoned": false, 00:15:53.794 "supported_io_types": { 
00:15:53.794 "read": true, 00:15:53.794 "write": true, 00:15:53.794 "unmap": true, 00:15:53.794 "write_zeroes": true, 00:15:53.794 "flush": true, 00:15:53.794 "reset": true, 00:15:53.794 "compare": false, 00:15:53.794 "compare_and_write": false, 00:15:53.794 "abort": true, 00:15:53.794 "nvme_admin": false, 00:15:53.794 "nvme_io": false 00:15:53.794 }, 00:15:53.794 "memory_domains": [ 00:15:53.794 { 00:15:53.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.794 "dma_device_type": 2 00:15:53.794 } 00:15:53.794 ], 00:15:53.794 "driver_specific": {} 00:15:53.794 } 00:15:53.794 ] 00:15:53.794 20:42:37 -- common/autotest_common.sh@895 -- # return 0 00:15:53.794 20:42:37 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:53.794 20:42:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:53.794 20:42:37 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:15:53.794 20:42:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:53.794 20:42:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:53.794 20:42:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:53.794 20:42:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:53.794 20:42:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:53.794 20:42:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:53.794 20:42:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:53.794 20:42:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:53.794 20:42:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:53.794 20:42:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.794 20:42:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.053 20:42:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:54.053 "name": "Existed_Raid", 00:15:54.053 "uuid": "3e1f38c5-5f57-44c6-9f29-c7be5a68b05a", 00:15:54.053 "strip_size_kb": 0, 00:15:54.053 "state": "online", 00:15:54.053 "raid_level": "raid1", 00:15:54.053 "superblock": false, 00:15:54.053 "num_base_bdevs": 4, 00:15:54.053 "num_base_bdevs_discovered": 4, 00:15:54.053 "num_base_bdevs_operational": 4, 00:15:54.053 "base_bdevs_list": [ 00:15:54.053 { 00:15:54.053 "name": "BaseBdev1", 00:15:54.053 "uuid": "6ec8d7e6-8fd1-46aa-b8fc-ca39853f1580", 00:15:54.053 "is_configured": true, 00:15:54.053 "data_offset": 0, 00:15:54.053 "data_size": 65536 00:15:54.053 }, 00:15:54.053 { 00:15:54.053 "name": "BaseBdev2", 00:15:54.053 "uuid": "3eaf5edd-1e32-47bb-9108-9f664171b248", 00:15:54.053 "is_configured": true, 00:15:54.053 "data_offset": 0, 00:15:54.053 "data_size": 65536 00:15:54.053 }, 00:15:54.053 { 00:15:54.053 "name": "BaseBdev3", 00:15:54.053 "uuid": "89e571ee-55e3-4e8a-854e-60a438bd1435", 00:15:54.053 "is_configured": true, 00:15:54.053 "data_offset": 0, 00:15:54.053 "data_size": 65536 00:15:54.053 }, 00:15:54.053 { 00:15:54.053 "name": "BaseBdev4", 00:15:54.053 "uuid": "c8a52678-4e7c-4ce6-a076-1476cd7fd07f", 00:15:54.053 "is_configured": true, 00:15:54.053 "data_offset": 0, 00:15:54.053 "data_size": 65536 00:15:54.053 } 00:15:54.053 ] 00:15:54.053 }' 00:15:54.053 20:42:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:54.053 20:42:37 -- common/autotest_common.sh@10 -- # set +x 00:15:54.622 20:42:37 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:54.622 [2024-04-15 20:42:38.090197] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:54.881 20:42:38 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:54.881 20:42:38 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:15:54.881 20:42:38 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:54.881 20:42:38 -- bdev/bdev_raid.sh@196 -- # return 0 00:15:54.881 20:42:38 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:15:54.881 20:42:38 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:54.881 20:42:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:54.881 20:42:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:54.881 20:42:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:54.881 20:42:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:54.881 20:42:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:54.881 20:42:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:54.881 20:42:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:54.881 20:42:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:54.881 20:42:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:54.881 20:42:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.881 20:42:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.881 20:42:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:54.881 "name": "Existed_Raid", 00:15:54.881 "uuid": "3e1f38c5-5f57-44c6-9f29-c7be5a68b05a", 00:15:54.881 "strip_size_kb": 0, 00:15:54.881 "state": "online", 00:15:54.881 "raid_level": "raid1", 00:15:54.881 "superblock": false, 00:15:54.881 "num_base_bdevs": 4, 00:15:54.881 "num_base_bdevs_discovered": 3, 00:15:54.881 "num_base_bdevs_operational": 3, 00:15:54.881 "base_bdevs_list": [ 00:15:54.881 { 00:15:54.881 "name": null, 00:15:54.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.881 "is_configured": false, 00:15:54.881 "data_offset": 0, 00:15:54.881 "data_size": 65536 00:15:54.881 }, 00:15:54.881 { 00:15:54.881 "name": "BaseBdev2", 00:15:54.881 "uuid": "3eaf5edd-1e32-47bb-9108-9f664171b248", 00:15:54.881 "is_configured": true, 00:15:54.881 "data_offset": 0, 00:15:54.881 "data_size": 65536 00:15:54.881 }, 00:15:54.881 { 00:15:54.881 "name": "BaseBdev3", 00:15:54.881 "uuid": "89e571ee-55e3-4e8a-854e-60a438bd1435", 00:15:54.881 "is_configured": true, 00:15:54.881 "data_offset": 0, 00:15:54.881 "data_size": 65536 00:15:54.881 }, 00:15:54.881 { 00:15:54.881 "name": "BaseBdev4", 00:15:54.881 "uuid": "c8a52678-4e7c-4ce6-a076-1476cd7fd07f", 00:15:54.881 "is_configured": true, 00:15:54.881 "data_offset": 0, 00:15:54.881 "data_size": 65536 00:15:54.881 } 00:15:54.881 ] 00:15:54.881 }' 00:15:54.881 20:42:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:54.882 20:42:38 -- common/autotest_common.sh@10 -- # set +x 00:15:55.449 20:42:38 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:55.449 20:42:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:55.449 20:42:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:55.449 20:42:38 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:55.707 20:42:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:55.707 20:42:38 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:55.707 20:42:38 -- bdev/bdev_raid.sh@279 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:55.707 [2024-04-15 20:42:39.141656] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:55.966 20:42:39 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:55.966 20:42:39 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:55.966 20:42:39 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:55.966 20:42:39 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:55.966 20:42:39 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:55.966 20:42:39 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:55.966 20:42:39 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:56.225 [2024-04-15 20:42:39.572957] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:56.225 20:42:39 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:56.225 20:42:39 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:56.225 20:42:39 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:56.225 20:42:39 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:56.484 20:42:39 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:56.484 20:42:39 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:56.484 20:42:39 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:15:56.849 [2024-04-15 20:42:40.007729] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:56.849 [2024-04-15 20:42:40.007756] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:56.849 [2024-04-15 20:42:40.007791] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:56.849 [2024-04-15 20:42:40.094036] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:56.849 [2024-04-15 20:42:40.094070] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000028b80 name Existed_Raid, state offline 00:15:56.849 20:42:40 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:56.849 20:42:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:56.849 20:42:40 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:56.849 20:42:40 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:57.107 20:42:40 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:57.107 20:42:40 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:57.107 20:42:40 -- bdev/bdev_raid.sh@287 -- # killprocess 54814 00:15:57.107 20:42:40 -- common/autotest_common.sh@926 -- # '[' -z 54814 ']' 00:15:57.107 20:42:40 -- common/autotest_common.sh@930 -- # kill -0 54814 00:15:57.107 20:42:40 -- common/autotest_common.sh@931 -- # uname 00:15:57.107 20:42:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:57.107 20:42:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 54814 00:15:57.107 killing process with pid 54814 00:15:57.108 20:42:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:57.108 20:42:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:57.108 20:42:40 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 54814' 00:15:57.108 20:42:40 -- common/autotest_common.sh@945 -- # kill 54814 00:15:57.108 20:42:40 -- common/autotest_common.sh@950 -- # wait 54814 00:15:57.108 [2024-04-15 20:42:40.310504] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:57.108 [2024-04-15 20:42:40.310598] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:58.488 20:42:41 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:58.488 00:15:58.488 real 0m12.299s 00:15:58.488 user 0m20.848s 00:15:58.488 sys 0m1.537s 00:15:58.488 20:42:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:58.488 20:42:41 -- common/autotest_common.sh@10 -- # set +x 00:15:58.488 ************************************ 00:15:58.488 END TEST raid_state_function_test 00:15:58.488 ************************************ 00:15:58.488 20:42:41 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:15:58.488 20:42:41 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:58.488 20:42:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:58.488 20:42:41 -- common/autotest_common.sh@10 -- # set +x 00:15:58.488 ************************************ 00:15:58.488 START TEST raid_state_function_test_sb 00:15:58.488 ************************************ 00:15:58.488 20:42:41 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 true 00:15:58.488 20:42:41 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:15:58.488 20:42:41 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:15:58.488 20:42:41 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:58.488 20:42:41 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:58.488 20:42:41 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:58.488 20:42:41 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:58.488 20:42:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:58.488 20:42:41 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:58.488 20:42:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:58.488 20:42:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:58.488 20:42:41 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:58.488 20:42:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:58.488 20:42:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:58.488 20:42:41 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:15:58.488 20:42:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:58.488 20:42:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:58.488 20:42:41 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:15:58.488 20:42:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:58.488 20:42:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:58.488 20:42:41 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:58.488 20:42:41 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:58.488 20:42:41 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:58.488 20:42:41 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:58.488 20:42:41 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:58.488 20:42:41 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:15:58.488 20:42:41 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:15:58.488 20:42:41 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:58.488 20:42:41 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:58.489 
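Editor's note: the teardown just traced is worth unpacking. Because has_redundancy returns success for raid1, deleting BaseBdev1 through BaseBdev3 left the array online, and only removing the last base bdev drove the state from online to offline before killprocess reaped the app. killprocess itself follows a standard idiom: check the pid is still alive, confirm it really is the SPDK reactor, then kill it and wait so the RPC socket is free for the next test. A sketch of that idiom under the same assumptions (stop_spdk_app is an illustrative name, not the harness function, and the app must be a child of the calling shell for wait to reap it):

    # Kill-and-wait teardown, as seen in the killprocess trace above.
    stop_spdk_app() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0                      # already gone
        [ "$(ps --no-headers -o comm= "$pid")" = "reactor_0" ] || return 1
        kill "$pid"
        wait "$pid" || true                                         # killed app exits non-zero
    }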
Process raid pid: 55246 00:15:58.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:58.489 20:42:41 -- bdev/bdev_raid.sh@226 -- # raid_pid=55246 00:15:58.489 20:42:41 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 55246' 00:15:58.489 20:42:41 -- bdev/bdev_raid.sh@228 -- # waitforlisten 55246 /var/tmp/spdk-raid.sock 00:15:58.489 20:42:41 -- common/autotest_common.sh@819 -- # '[' -z 55246 ']' 00:15:58.489 20:42:41 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:58.489 20:42:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:58.489 20:42:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:58.489 20:42:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:58.489 20:42:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:58.489 20:42:41 -- common/autotest_common.sh@10 -- # set +x 00:15:58.489 [2024-04-15 20:42:41.795943] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:15:58.489 [2024-04-15 20:42:41.796079] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:58.489 [2024-04-15 20:42:41.949101] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.748 [2024-04-15 20:42:42.145319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.006 [2024-04-15 20:42:42.335841] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:59.943 20:42:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:59.943 20:42:43 -- common/autotest_common.sh@852 -- # return 0 00:15:59.943 20:42:43 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:59.943 [2024-04-15 20:42:43.322417] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:59.943 [2024-04-15 20:42:43.322482] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:59.943 [2024-04-15 20:42:43.322492] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:59.943 [2024-04-15 20:42:43.322509] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:59.943 [2024-04-15 20:42:43.322516] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:59.943 [2024-04-15 20:42:43.322554] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:59.943 [2024-04-15 20:42:43.322562] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:59.943 [2024-04-15 20:42:43.322585] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:59.943 20:42:43 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:59.943 20:42:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:59.943 20:42:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:59.943 20:42:43 -- bdev/bdev_raid.sh@119 -- # local 
raid_level=raid1 00:15:59.943 20:42:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:59.943 20:42:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:59.943 20:42:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:59.943 20:42:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:59.943 20:42:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:59.943 20:42:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:59.943 20:42:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.943 20:42:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.202 20:42:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:00.202 "name": "Existed_Raid", 00:16:00.202 "uuid": "06c584e7-56ef-42f8-b7f6-5adf43c2e2b4", 00:16:00.202 "strip_size_kb": 0, 00:16:00.202 "state": "configuring", 00:16:00.202 "raid_level": "raid1", 00:16:00.202 "superblock": true, 00:16:00.202 "num_base_bdevs": 4, 00:16:00.202 "num_base_bdevs_discovered": 0, 00:16:00.202 "num_base_bdevs_operational": 4, 00:16:00.202 "base_bdevs_list": [ 00:16:00.202 { 00:16:00.202 "name": "BaseBdev1", 00:16:00.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.202 "is_configured": false, 00:16:00.202 "data_offset": 0, 00:16:00.202 "data_size": 0 00:16:00.202 }, 00:16:00.202 { 00:16:00.202 "name": "BaseBdev2", 00:16:00.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.202 "is_configured": false, 00:16:00.202 "data_offset": 0, 00:16:00.202 "data_size": 0 00:16:00.202 }, 00:16:00.202 { 00:16:00.202 "name": "BaseBdev3", 00:16:00.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.202 "is_configured": false, 00:16:00.202 "data_offset": 0, 00:16:00.202 "data_size": 0 00:16:00.202 }, 00:16:00.202 { 00:16:00.202 "name": "BaseBdev4", 00:16:00.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.202 "is_configured": false, 00:16:00.202 "data_offset": 0, 00:16:00.202 "data_size": 0 00:16:00.202 } 00:16:00.202 ] 00:16:00.202 }' 00:16:00.202 20:42:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:00.202 20:42:43 -- common/autotest_common.sh@10 -- # set +x 00:16:00.771 20:42:43 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:00.771 [2024-04-15 20:42:44.137202] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:00.771 [2024-04-15 20:42:44.137241] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000026780 name Existed_Raid, state configuring 00:16:00.771 20:42:44 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:01.031 [2024-04-15 20:42:44.312985] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:01.031 [2024-04-15 20:42:44.313039] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:01.031 [2024-04-15 20:42:44.313049] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:01.031 [2024-04-15 20:42:44.313087] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:01.031 [2024-04-15 20:42:44.313095] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:01.031 [2024-04-15 
20:42:44.313115] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:01.031 [2024-04-15 20:42:44.313121] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:01.031 [2024-04-15 20:42:44.313142] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:01.031 20:42:44 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:01.031 [2024-04-15 20:42:44.522127] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:01.031 BaseBdev1 00:16:01.290 20:42:44 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:01.290 20:42:44 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:01.290 20:42:44 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:01.290 20:42:44 -- common/autotest_common.sh@889 -- # local i 00:16:01.290 20:42:44 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:01.290 20:42:44 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:01.290 20:42:44 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:01.290 20:42:44 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:01.605 [ 00:16:01.605 { 00:16:01.605 "name": "BaseBdev1", 00:16:01.605 "aliases": [ 00:16:01.605 "65a2372c-9be0-4326-8451-667c7dc1bd4e" 00:16:01.605 ], 00:16:01.605 "product_name": "Malloc disk", 00:16:01.605 "block_size": 512, 00:16:01.605 "num_blocks": 65536, 00:16:01.605 "uuid": "65a2372c-9be0-4326-8451-667c7dc1bd4e", 00:16:01.605 "assigned_rate_limits": { 00:16:01.605 "rw_ios_per_sec": 0, 00:16:01.605 "rw_mbytes_per_sec": 0, 00:16:01.605 "r_mbytes_per_sec": 0, 00:16:01.605 "w_mbytes_per_sec": 0 00:16:01.605 }, 00:16:01.605 "claimed": true, 00:16:01.605 "claim_type": "exclusive_write", 00:16:01.605 "zoned": false, 00:16:01.605 "supported_io_types": { 00:16:01.605 "read": true, 00:16:01.605 "write": true, 00:16:01.605 "unmap": true, 00:16:01.605 "write_zeroes": true, 00:16:01.605 "flush": true, 00:16:01.605 "reset": true, 00:16:01.605 "compare": false, 00:16:01.605 "compare_and_write": false, 00:16:01.605 "abort": true, 00:16:01.605 "nvme_admin": false, 00:16:01.605 "nvme_io": false 00:16:01.605 }, 00:16:01.605 "memory_domains": [ 00:16:01.605 { 00:16:01.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.605 "dma_device_type": 2 00:16:01.605 } 00:16:01.605 ], 00:16:01.605 "driver_specific": {} 00:16:01.605 } 00:16:01.605 ] 00:16:01.605 20:42:44 -- common/autotest_common.sh@895 -- # return 0 00:16:01.605 20:42:44 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:01.605 20:42:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:01.605 20:42:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:01.605 20:42:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:01.605 20:42:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:01.605 20:42:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:01.605 20:42:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:01.605 20:42:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:01.605 20:42:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:01.605 20:42:44 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:16:01.605 20:42:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.605 20:42:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:01.864 20:42:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:01.864 "name": "Existed_Raid", 00:16:01.864 "uuid": "21c73602-a046-4e4c-9078-64911484012c", 00:16:01.864 "strip_size_kb": 0, 00:16:01.864 "state": "configuring", 00:16:01.864 "raid_level": "raid1", 00:16:01.864 "superblock": true, 00:16:01.864 "num_base_bdevs": 4, 00:16:01.864 "num_base_bdevs_discovered": 1, 00:16:01.864 "num_base_bdevs_operational": 4, 00:16:01.864 "base_bdevs_list": [ 00:16:01.864 { 00:16:01.864 "name": "BaseBdev1", 00:16:01.864 "uuid": "65a2372c-9be0-4326-8451-667c7dc1bd4e", 00:16:01.864 "is_configured": true, 00:16:01.864 "data_offset": 2048, 00:16:01.864 "data_size": 63488 00:16:01.864 }, 00:16:01.864 { 00:16:01.864 "name": "BaseBdev2", 00:16:01.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.864 "is_configured": false, 00:16:01.864 "data_offset": 0, 00:16:01.864 "data_size": 0 00:16:01.864 }, 00:16:01.864 { 00:16:01.864 "name": "BaseBdev3", 00:16:01.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.864 "is_configured": false, 00:16:01.864 "data_offset": 0, 00:16:01.864 "data_size": 0 00:16:01.865 }, 00:16:01.865 { 00:16:01.865 "name": "BaseBdev4", 00:16:01.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.865 "is_configured": false, 00:16:01.865 "data_offset": 0, 00:16:01.865 "data_size": 0 00:16:01.865 } 00:16:01.865 ] 00:16:01.865 }' 00:16:01.865 20:42:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:01.865 20:42:45 -- common/autotest_common.sh@10 -- # set +x 00:16:02.433 20:42:45 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:02.433 [2024-04-15 20:42:45.832162] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:02.433 [2024-04-15 20:42:45.832205] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000027680 name Existed_Raid, state configuring 00:16:02.433 20:42:45 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:02.433 20:42:45 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:02.692 20:42:46 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:02.951 BaseBdev1 00:16:02.951 20:42:46 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:02.951 20:42:46 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:02.951 20:42:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:02.951 20:42:46 -- common/autotest_common.sh@889 -- # local i 00:16:02.951 20:42:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:02.951 20:42:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:02.951 20:42:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:02.951 20:42:46 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:03.210 [ 00:16:03.210 { 00:16:03.210 "name": "BaseBdev1", 00:16:03.210 "aliases": [ 00:16:03.210 
"a8f8c5a0-f1c1-4e33-ba69-74e16d099875" 00:16:03.210 ], 00:16:03.210 "product_name": "Malloc disk", 00:16:03.210 "block_size": 512, 00:16:03.210 "num_blocks": 65536, 00:16:03.210 "uuid": "a8f8c5a0-f1c1-4e33-ba69-74e16d099875", 00:16:03.210 "assigned_rate_limits": { 00:16:03.210 "rw_ios_per_sec": 0, 00:16:03.210 "rw_mbytes_per_sec": 0, 00:16:03.210 "r_mbytes_per_sec": 0, 00:16:03.210 "w_mbytes_per_sec": 0 00:16:03.210 }, 00:16:03.210 "claimed": false, 00:16:03.210 "zoned": false, 00:16:03.210 "supported_io_types": { 00:16:03.210 "read": true, 00:16:03.210 "write": true, 00:16:03.210 "unmap": true, 00:16:03.210 "write_zeroes": true, 00:16:03.210 "flush": true, 00:16:03.210 "reset": true, 00:16:03.210 "compare": false, 00:16:03.210 "compare_and_write": false, 00:16:03.210 "abort": true, 00:16:03.210 "nvme_admin": false, 00:16:03.210 "nvme_io": false 00:16:03.210 }, 00:16:03.210 "memory_domains": [ 00:16:03.210 { 00:16:03.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.210 "dma_device_type": 2 00:16:03.210 } 00:16:03.210 ], 00:16:03.210 "driver_specific": {} 00:16:03.210 } 00:16:03.210 ] 00:16:03.210 20:42:46 -- common/autotest_common.sh@895 -- # return 0 00:16:03.210 20:42:46 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:03.210 [2024-04-15 20:42:46.691638] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:03.210 [2024-04-15 20:42:46.692837] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:03.210 [2024-04-15 20:42:46.692899] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:03.210 [2024-04-15 20:42:46.692909] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:03.210 [2024-04-15 20:42:46.692929] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:03.210 [2024-04-15 20:42:46.692937] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:03.210 [2024-04-15 20:42:46.692951] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:03.210 20:42:46 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:03.210 20:42:46 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:03.210 20:42:46 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:03.210 20:42:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:03.210 20:42:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:03.211 20:42:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:03.211 20:42:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:03.211 20:42:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:03.211 20:42:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:03.211 20:42:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:03.211 20:42:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:03.211 20:42:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:03.211 20:42:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.211 20:42:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.470 20:42:46 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:16:03.470 "name": "Existed_Raid", 00:16:03.470 "uuid": "24411359-cb47-4e6d-852d-7903c70572d2", 00:16:03.470 "strip_size_kb": 0, 00:16:03.470 "state": "configuring", 00:16:03.470 "raid_level": "raid1", 00:16:03.470 "superblock": true, 00:16:03.470 "num_base_bdevs": 4, 00:16:03.470 "num_base_bdevs_discovered": 1, 00:16:03.470 "num_base_bdevs_operational": 4, 00:16:03.470 "base_bdevs_list": [ 00:16:03.470 { 00:16:03.470 "name": "BaseBdev1", 00:16:03.470 "uuid": "a8f8c5a0-f1c1-4e33-ba69-74e16d099875", 00:16:03.470 "is_configured": true, 00:16:03.470 "data_offset": 2048, 00:16:03.470 "data_size": 63488 00:16:03.470 }, 00:16:03.470 { 00:16:03.470 "name": "BaseBdev2", 00:16:03.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.470 "is_configured": false, 00:16:03.470 "data_offset": 0, 00:16:03.470 "data_size": 0 00:16:03.470 }, 00:16:03.470 { 00:16:03.470 "name": "BaseBdev3", 00:16:03.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.470 "is_configured": false, 00:16:03.470 "data_offset": 0, 00:16:03.470 "data_size": 0 00:16:03.470 }, 00:16:03.470 { 00:16:03.470 "name": "BaseBdev4", 00:16:03.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.470 "is_configured": false, 00:16:03.470 "data_offset": 0, 00:16:03.470 "data_size": 0 00:16:03.470 } 00:16:03.470 ] 00:16:03.470 }' 00:16:03.470 20:42:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:03.470 20:42:46 -- common/autotest_common.sh@10 -- # set +x 00:16:04.037 20:42:47 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:04.037 BaseBdev2 00:16:04.037 [2024-04-15 20:42:47.502089] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:04.037 20:42:47 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:04.037 20:42:47 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:04.037 20:42:47 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:04.037 20:42:47 -- common/autotest_common.sh@889 -- # local i 00:16:04.037 20:42:47 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:04.037 20:42:47 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:04.037 20:42:47 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:04.296 20:42:47 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:04.556 [ 00:16:04.556 { 00:16:04.556 "name": "BaseBdev2", 00:16:04.556 "aliases": [ 00:16:04.556 "783dc613-56dd-419d-9bbc-d6c99638ad51" 00:16:04.556 ], 00:16:04.556 "product_name": "Malloc disk", 00:16:04.556 "block_size": 512, 00:16:04.556 "num_blocks": 65536, 00:16:04.556 "uuid": "783dc613-56dd-419d-9bbc-d6c99638ad51", 00:16:04.556 "assigned_rate_limits": { 00:16:04.556 "rw_ios_per_sec": 0, 00:16:04.556 "rw_mbytes_per_sec": 0, 00:16:04.556 "r_mbytes_per_sec": 0, 00:16:04.556 "w_mbytes_per_sec": 0 00:16:04.556 }, 00:16:04.556 "claimed": true, 00:16:04.556 "claim_type": "exclusive_write", 00:16:04.556 "zoned": false, 00:16:04.556 "supported_io_types": { 00:16:04.556 "read": true, 00:16:04.556 "write": true, 00:16:04.556 "unmap": true, 00:16:04.556 "write_zeroes": true, 00:16:04.556 "flush": true, 00:16:04.556 "reset": true, 00:16:04.556 "compare": false, 00:16:04.556 "compare_and_write": false, 00:16:04.556 "abort": true, 00:16:04.556 "nvme_admin": false, 00:16:04.556 
"nvme_io": false 00:16:04.556 }, 00:16:04.556 "memory_domains": [ 00:16:04.556 { 00:16:04.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.556 "dma_device_type": 2 00:16:04.556 } 00:16:04.556 ], 00:16:04.556 "driver_specific": {} 00:16:04.556 } 00:16:04.556 ] 00:16:04.556 20:42:47 -- common/autotest_common.sh@895 -- # return 0 00:16:04.556 20:42:47 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:04.556 20:42:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:04.556 20:42:47 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:04.556 20:42:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:04.556 20:42:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:04.556 20:42:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:04.556 20:42:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:04.556 20:42:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:04.556 20:42:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:04.556 20:42:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:04.556 20:42:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:04.556 20:42:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:04.556 20:42:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.556 20:42:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.556 20:42:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:04.556 "name": "Existed_Raid", 00:16:04.556 "uuid": "24411359-cb47-4e6d-852d-7903c70572d2", 00:16:04.556 "strip_size_kb": 0, 00:16:04.556 "state": "configuring", 00:16:04.556 "raid_level": "raid1", 00:16:04.556 "superblock": true, 00:16:04.556 "num_base_bdevs": 4, 00:16:04.556 "num_base_bdevs_discovered": 2, 00:16:04.556 "num_base_bdevs_operational": 4, 00:16:04.556 "base_bdevs_list": [ 00:16:04.556 { 00:16:04.556 "name": "BaseBdev1", 00:16:04.556 "uuid": "a8f8c5a0-f1c1-4e33-ba69-74e16d099875", 00:16:04.556 "is_configured": true, 00:16:04.556 "data_offset": 2048, 00:16:04.556 "data_size": 63488 00:16:04.556 }, 00:16:04.556 { 00:16:04.556 "name": "BaseBdev2", 00:16:04.556 "uuid": "783dc613-56dd-419d-9bbc-d6c99638ad51", 00:16:04.556 "is_configured": true, 00:16:04.556 "data_offset": 2048, 00:16:04.556 "data_size": 63488 00:16:04.556 }, 00:16:04.556 { 00:16:04.556 "name": "BaseBdev3", 00:16:04.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.556 "is_configured": false, 00:16:04.556 "data_offset": 0, 00:16:04.556 "data_size": 0 00:16:04.556 }, 00:16:04.556 { 00:16:04.556 "name": "BaseBdev4", 00:16:04.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.556 "is_configured": false, 00:16:04.556 "data_offset": 0, 00:16:04.556 "data_size": 0 00:16:04.556 } 00:16:04.556 ] 00:16:04.556 }' 00:16:04.556 20:42:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:04.556 20:42:48 -- common/autotest_common.sh@10 -- # set +x 00:16:05.125 20:42:48 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:05.384 BaseBdev3 00:16:05.384 [2024-04-15 20:42:48.700993] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:05.384 20:42:48 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:05.384 20:42:48 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:05.384 20:42:48 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:05.384 20:42:48 -- common/autotest_common.sh@889 -- # local i 00:16:05.384 20:42:48 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:05.384 20:42:48 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:05.384 20:42:48 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:05.384 20:42:48 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:05.644 [ 00:16:05.644 { 00:16:05.644 "name": "BaseBdev3", 00:16:05.644 "aliases": [ 00:16:05.644 "3491b682-bc3d-4897-badb-d577c1e9df58" 00:16:05.644 ], 00:16:05.644 "product_name": "Malloc disk", 00:16:05.644 "block_size": 512, 00:16:05.644 "num_blocks": 65536, 00:16:05.644 "uuid": "3491b682-bc3d-4897-badb-d577c1e9df58", 00:16:05.644 "assigned_rate_limits": { 00:16:05.644 "rw_ios_per_sec": 0, 00:16:05.644 "rw_mbytes_per_sec": 0, 00:16:05.644 "r_mbytes_per_sec": 0, 00:16:05.644 "w_mbytes_per_sec": 0 00:16:05.644 }, 00:16:05.644 "claimed": true, 00:16:05.644 "claim_type": "exclusive_write", 00:16:05.644 "zoned": false, 00:16:05.644 "supported_io_types": { 00:16:05.644 "read": true, 00:16:05.644 "write": true, 00:16:05.644 "unmap": true, 00:16:05.644 "write_zeroes": true, 00:16:05.644 "flush": true, 00:16:05.644 "reset": true, 00:16:05.644 "compare": false, 00:16:05.644 "compare_and_write": false, 00:16:05.644 "abort": true, 00:16:05.644 "nvme_admin": false, 00:16:05.644 "nvme_io": false 00:16:05.644 }, 00:16:05.644 "memory_domains": [ 00:16:05.644 { 00:16:05.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.644 "dma_device_type": 2 00:16:05.644 } 00:16:05.644 ], 00:16:05.644 "driver_specific": {} 00:16:05.644 } 00:16:05.644 ] 00:16:05.644 20:42:49 -- common/autotest_common.sh@895 -- # return 0 00:16:05.644 20:42:49 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:05.644 20:42:49 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:05.644 20:42:49 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:05.644 20:42:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:05.644 20:42:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:05.644 20:42:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:05.644 20:42:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:05.644 20:42:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:05.644 20:42:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:05.644 20:42:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:05.644 20:42:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:05.644 20:42:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:05.644 20:42:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.644 20:42:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.903 20:42:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:05.903 "name": "Existed_Raid", 00:16:05.903 "uuid": "24411359-cb47-4e6d-852d-7903c70572d2", 00:16:05.903 "strip_size_kb": 0, 00:16:05.903 "state": "configuring", 00:16:05.903 "raid_level": "raid1", 00:16:05.903 "superblock": true, 00:16:05.903 "num_base_bdevs": 4, 00:16:05.903 "num_base_bdevs_discovered": 3, 00:16:05.903 "num_base_bdevs_operational": 4, 00:16:05.903 
"base_bdevs_list": [ 00:16:05.903 { 00:16:05.903 "name": "BaseBdev1", 00:16:05.903 "uuid": "a8f8c5a0-f1c1-4e33-ba69-74e16d099875", 00:16:05.903 "is_configured": true, 00:16:05.903 "data_offset": 2048, 00:16:05.903 "data_size": 63488 00:16:05.903 }, 00:16:05.903 { 00:16:05.903 "name": "BaseBdev2", 00:16:05.903 "uuid": "783dc613-56dd-419d-9bbc-d6c99638ad51", 00:16:05.903 "is_configured": true, 00:16:05.903 "data_offset": 2048, 00:16:05.903 "data_size": 63488 00:16:05.903 }, 00:16:05.903 { 00:16:05.903 "name": "BaseBdev3", 00:16:05.903 "uuid": "3491b682-bc3d-4897-badb-d577c1e9df58", 00:16:05.903 "is_configured": true, 00:16:05.903 "data_offset": 2048, 00:16:05.903 "data_size": 63488 00:16:05.903 }, 00:16:05.903 { 00:16:05.903 "name": "BaseBdev4", 00:16:05.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.903 "is_configured": false, 00:16:05.903 "data_offset": 0, 00:16:05.903 "data_size": 0 00:16:05.903 } 00:16:05.903 ] 00:16:05.903 }' 00:16:05.903 20:42:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:05.903 20:42:49 -- common/autotest_common.sh@10 -- # set +x 00:16:06.471 20:42:49 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:06.471 [2024-04-15 20:42:49.923619] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:06.472 [2024-04-15 20:42:49.923767] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000029180 00:16:06.472 [2024-04-15 20:42:49.923779] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:06.472 [2024-04-15 20:42:49.923885] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:16:06.472 [2024-04-15 20:42:49.924066] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000029180 00:16:06.472 [2024-04-15 20:42:49.924076] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000029180 00:16:06.472 BaseBdev4 00:16:06.472 [2024-04-15 20:42:49.924190] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.472 20:42:49 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:16:06.472 20:42:49 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:16:06.472 20:42:49 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:06.472 20:42:49 -- common/autotest_common.sh@889 -- # local i 00:16:06.472 20:42:49 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:06.472 20:42:49 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:06.472 20:42:49 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:06.730 20:42:50 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:06.989 [ 00:16:06.989 { 00:16:06.989 "name": "BaseBdev4", 00:16:06.989 "aliases": [ 00:16:06.989 "d9992aa4-6877-4fcf-bd3b-4fe28be0f2b3" 00:16:06.989 ], 00:16:06.989 "product_name": "Malloc disk", 00:16:06.989 "block_size": 512, 00:16:06.989 "num_blocks": 65536, 00:16:06.989 "uuid": "d9992aa4-6877-4fcf-bd3b-4fe28be0f2b3", 00:16:06.989 "assigned_rate_limits": { 00:16:06.989 "rw_ios_per_sec": 0, 00:16:06.989 "rw_mbytes_per_sec": 0, 00:16:06.989 "r_mbytes_per_sec": 0, 00:16:06.989 "w_mbytes_per_sec": 0 00:16:06.989 }, 00:16:06.989 "claimed": true, 00:16:06.989 "claim_type": 
"exclusive_write", 00:16:06.989 "zoned": false, 00:16:06.989 "supported_io_types": { 00:16:06.989 "read": true, 00:16:06.989 "write": true, 00:16:06.989 "unmap": true, 00:16:06.989 "write_zeroes": true, 00:16:06.989 "flush": true, 00:16:06.989 "reset": true, 00:16:06.989 "compare": false, 00:16:06.989 "compare_and_write": false, 00:16:06.989 "abort": true, 00:16:06.989 "nvme_admin": false, 00:16:06.989 "nvme_io": false 00:16:06.989 }, 00:16:06.989 "memory_domains": [ 00:16:06.989 { 00:16:06.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.989 "dma_device_type": 2 00:16:06.989 } 00:16:06.989 ], 00:16:06.989 "driver_specific": {} 00:16:06.989 } 00:16:06.989 ] 00:16:06.989 20:42:50 -- common/autotest_common.sh@895 -- # return 0 00:16:06.989 20:42:50 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:06.989 20:42:50 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:06.989 20:42:50 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:16:06.989 20:42:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:06.989 20:42:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:06.989 20:42:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:06.989 20:42:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:06.989 20:42:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:06.989 20:42:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:06.989 20:42:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:06.989 20:42:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:06.989 20:42:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:06.989 20:42:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.989 20:42:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:06.989 20:42:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:06.989 "name": "Existed_Raid", 00:16:06.989 "uuid": "24411359-cb47-4e6d-852d-7903c70572d2", 00:16:06.989 "strip_size_kb": 0, 00:16:06.989 "state": "online", 00:16:06.989 "raid_level": "raid1", 00:16:06.989 "superblock": true, 00:16:06.989 "num_base_bdevs": 4, 00:16:06.989 "num_base_bdevs_discovered": 4, 00:16:06.989 "num_base_bdevs_operational": 4, 00:16:06.989 "base_bdevs_list": [ 00:16:06.989 { 00:16:06.989 "name": "BaseBdev1", 00:16:06.989 "uuid": "a8f8c5a0-f1c1-4e33-ba69-74e16d099875", 00:16:06.989 "is_configured": true, 00:16:06.989 "data_offset": 2048, 00:16:06.989 "data_size": 63488 00:16:06.989 }, 00:16:06.989 { 00:16:06.989 "name": "BaseBdev2", 00:16:06.989 "uuid": "783dc613-56dd-419d-9bbc-d6c99638ad51", 00:16:06.989 "is_configured": true, 00:16:06.989 "data_offset": 2048, 00:16:06.989 "data_size": 63488 00:16:06.989 }, 00:16:06.989 { 00:16:06.989 "name": "BaseBdev3", 00:16:06.989 "uuid": "3491b682-bc3d-4897-badb-d577c1e9df58", 00:16:06.989 "is_configured": true, 00:16:06.989 "data_offset": 2048, 00:16:06.989 "data_size": 63488 00:16:06.989 }, 00:16:06.989 { 00:16:06.989 "name": "BaseBdev4", 00:16:06.989 "uuid": "d9992aa4-6877-4fcf-bd3b-4fe28be0f2b3", 00:16:06.989 "is_configured": true, 00:16:06.989 "data_offset": 2048, 00:16:06.989 "data_size": 63488 00:16:06.989 } 00:16:06.989 ] 00:16:06.989 }' 00:16:06.989 20:42:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:06.989 20:42:50 -- common/autotest_common.sh@10 -- # set +x 00:16:07.557 20:42:50 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:07.816 [2024-04-15 20:42:51.149829] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:07.816 20:42:51 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:07.816 20:42:51 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:16:07.816 20:42:51 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:07.816 20:42:51 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:07.816 20:42:51 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:16:07.816 20:42:51 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:07.816 20:42:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:07.816 20:42:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:07.816 20:42:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:07.816 20:42:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:07.816 20:42:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:07.816 20:42:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:07.816 20:42:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:07.816 20:42:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:07.816 20:42:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:07.816 20:42:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.817 20:42:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.075 20:42:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:08.075 "name": "Existed_Raid", 00:16:08.075 "uuid": "24411359-cb47-4e6d-852d-7903c70572d2", 00:16:08.075 "strip_size_kb": 0, 00:16:08.075 "state": "online", 00:16:08.075 "raid_level": "raid1", 00:16:08.075 "superblock": true, 00:16:08.075 "num_base_bdevs": 4, 00:16:08.075 "num_base_bdevs_discovered": 3, 00:16:08.075 "num_base_bdevs_operational": 3, 00:16:08.075 "base_bdevs_list": [ 00:16:08.075 { 00:16:08.075 "name": null, 00:16:08.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.075 "is_configured": false, 00:16:08.075 "data_offset": 2048, 00:16:08.075 "data_size": 63488 00:16:08.075 }, 00:16:08.075 { 00:16:08.075 "name": "BaseBdev2", 00:16:08.075 "uuid": "783dc613-56dd-419d-9bbc-d6c99638ad51", 00:16:08.075 "is_configured": true, 00:16:08.075 "data_offset": 2048, 00:16:08.075 "data_size": 63488 00:16:08.075 }, 00:16:08.075 { 00:16:08.075 "name": "BaseBdev3", 00:16:08.075 "uuid": "3491b682-bc3d-4897-badb-d577c1e9df58", 00:16:08.075 "is_configured": true, 00:16:08.075 "data_offset": 2048, 00:16:08.075 "data_size": 63488 00:16:08.075 }, 00:16:08.075 { 00:16:08.075 "name": "BaseBdev4", 00:16:08.075 "uuid": "d9992aa4-6877-4fcf-bd3b-4fe28be0f2b3", 00:16:08.075 "is_configured": true, 00:16:08.075 "data_offset": 2048, 00:16:08.075 "data_size": 63488 00:16:08.075 } 00:16:08.075 ] 00:16:08.075 }' 00:16:08.075 20:42:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:08.075 20:42:51 -- common/autotest_common.sh@10 -- # set +x 00:16:08.643 20:42:51 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:08.643 20:42:51 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:08.643 20:42:51 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.643 20:42:51 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:08.643 20:42:52 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:08.643 20:42:52 -- 
bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:08.643 20:42:52 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:08.901 [2024-04-15 20:42:52.231759] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:08.901 20:42:52 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:08.901 20:42:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:08.901 20:42:52 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:08.901 20:42:52 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.159 20:42:52 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:09.159 20:42:52 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:09.159 20:42:52 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:09.159 [2024-04-15 20:42:52.654519] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:09.417 20:42:52 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:09.417 20:42:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:09.417 20:42:52 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.417 20:42:52 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:09.417 20:42:52 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:09.417 20:42:52 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:09.417 20:42:52 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:16:09.676 [2024-04-15 20:42:53.078830] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:09.676 [2024-04-15 20:42:53.078859] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:09.676 [2024-04-15 20:42:53.078898] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:09.676 [2024-04-15 20:42:53.164186] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:09.676 [2024-04-15 20:42:53.164216] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000029180 name Existed_Raid, state offline 00:16:09.936 20:42:53 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:09.936 20:42:53 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:09.936 20:42:53 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:09.936 20:42:53 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.936 20:42:53 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:09.936 20:42:53 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:09.936 20:42:53 -- bdev/bdev_raid.sh@287 -- # killprocess 55246 00:16:09.936 20:42:53 -- common/autotest_common.sh@926 -- # '[' -z 55246 ']' 00:16:09.936 20:42:53 -- common/autotest_common.sh@930 -- # kill -0 55246 00:16:09.936 20:42:53 -- common/autotest_common.sh@931 -- # uname 00:16:09.936 20:42:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:09.936 20:42:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55246 00:16:09.936 killing process with pid 55246 00:16:09.936 20:42:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:09.936 
20:42:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:09.936 20:42:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55246' 00:16:09.936 20:42:53 -- common/autotest_common.sh@945 -- # kill 55246 00:16:09.936 20:42:53 -- common/autotest_common.sh@950 -- # wait 55246 00:16:09.936 [2024-04-15 20:42:53.385158] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:09.936 [2024-04-15 20:42:53.385278] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:11.315 ************************************ 00:16:11.315 END TEST raid_state_function_test_sb 00:16:11.315 ************************************ 00:16:11.315 20:42:54 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:11.315 00:16:11.315 real 0m12.973s 00:16:11.315 user 0m22.055s 00:16:11.315 sys 0m1.570s 00:16:11.315 20:42:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:11.315 20:42:54 -- common/autotest_common.sh@10 -- # set +x 00:16:11.315 20:42:54 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:16:11.315 20:42:54 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:11.315 20:42:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:11.315 20:42:54 -- common/autotest_common.sh@10 -- # set +x 00:16:11.315 ************************************ 00:16:11.315 START TEST raid_superblock_test 00:16:11.315 ************************************ 00:16:11.315 20:42:54 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 4 00:16:11.315 20:42:54 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:16:11.315 20:42:54 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:16:11.315 20:42:54 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:11.315 20:42:54 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:11.315 20:42:54 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:11.315 20:42:54 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:11.315 20:42:54 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:16:11.315 20:42:54 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:11.315 20:42:54 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:16:11.315 20:42:54 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:16:11.315 20:42:54 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:16:11.315 20:42:54 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:11.315 20:42:54 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:11.315 20:42:54 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:16:11.315 20:42:54 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:16:11.315 20:42:54 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:11.315 20:42:54 -- bdev/bdev_raid.sh@357 -- # raid_pid=55679 00:16:11.315 20:42:54 -- bdev/bdev_raid.sh@358 -- # waitforlisten 55679 /var/tmp/spdk-raid.sock 00:16:11.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:11.315 20:42:54 -- common/autotest_common.sh@819 -- # '[' -z 55679 ']' 00:16:11.315 20:42:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:11.315 20:42:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:11.315 20:42:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
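Editor's note: raid_superblock_test boots the same way the two state-function tests did: a bare bdev_svc application is started against the shared RPC socket, with -L bdev_raid enabling the raid debug log, and waitforlisten blocks until the socket answers. A minimal sketch of that boot sequence, with an illustrative polling loop standing in for waitforlisten (which additionally handles retry limits and cleanup); rpc_get_methods is used only as a cheap liveness probe:

    # Start the bare bdev_svc app and wait for its RPC socket to respond.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -L bdev_raid &
    svc_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
          rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done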
00:16:11.315 20:42:54 -- common/autotest_common.sh@828 -- # xtrace_disable
00:16:11.315 20:42:54 -- common/autotest_common.sh@10 -- # set +x
00:16:11.582 [2024-04-15 20:42:54.827190] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization...
00:16:11.582 [2024-04-15 20:42:54.827340] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55679 ]
00:16:11.582 [2024-04-15 20:42:54.983893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:11.847 [2024-04-15 20:42:55.172977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:12.106 [2024-04-15 20:42:55.360491] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:12.106 20:42:55 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:16:12.106 20:42:55 -- common/autotest_common.sh@852 -- # return 0
00:16:12.106 20:42:55 -- bdev/bdev_raid.sh@361 -- # (( i = 1 ))
00:16:12.106 20:42:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:16:12.106 20:42:55 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1
00:16:12.106 20:42:55 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1
00:16:12.106 20:42:55 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:16:12.106 20:42:55 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:12.106 20:42:55 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:16:12.106 20:42:55 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:12.106 20:42:55 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:16:12.365 malloc1
00:16:12.365 20:42:55 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:16:12.624 [2024-04-15 20:42:55.871096] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:16:12.624 [2024-04-15 20:42:55.871187] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:12.624 [2024-04-15 20:42:55.871233] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000027080
00:16:12.624 [2024-04-15 20:42:55.871290] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:12.624 [2024-04-15 20:42:55.872817] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:12.624 [2024-04-15 20:42:55.872854] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:16:12.624 pt1
00:16:12.624 20:42:55 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:16:12.624 20:42:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:16:12.624 20:42:55 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2
00:16:12.624 20:42:55 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2
00:16:12.624 20:42:55 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:16:12.624 20:42:55 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:12.624 20:42:55 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:16:12.624 20:42:55 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:12.624 20:42:55 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:16:12.624 malloc2
20:42:56 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:12.882 [2024-04-15 20:42:56.196148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:16:12.882 [2024-04-15 20:42:56.196222] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:16:12.882 [2024-04-15 20:42:56.196261] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000028e80
00:16:12.882 [2024-04-15 20:42:56.196295] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:16:12.882 [2024-04-15 20:42:56.197824] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:16:12.882 [2024-04-15 20:42:56.197860] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:16:12.882 pt2
00:16:12.882 20:42:56 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:16:12.882 20:42:56 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:16:12.882 20:42:56 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3
00:16:12.882 20:42:56 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3
00:16:12.882 20:42:56 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:16:12.882 20:42:56 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:16:12.882 20:42:56 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:16:12.882 20:42:56 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:16:12.882 20:42:56 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3
00:16:12.882 malloc3
00:16:13.142 20:42:56 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:16:13.142 [2024-04-15 20:42:56.567733] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
[2024-04-15 20:42:56.567813] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-04-15 20:42:56.567856] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002ac80
[2024-04-15 20:42:56.567891] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-04-15 20:42:56.569457] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-04-15 20:42:56.569498] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:16:13.142 pt3
20:42:56 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
20:42:56 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
20:42:56 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4
20:42:56 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4
20:42:56 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
20:42:56 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
20:42:56 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
20:42:56 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
20:42:56 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4
00:16:13.401 malloc4
00:16:13.401 20:42:56 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:16:13.660 [2024-04-15 20:42:56.904625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
[2024-04-15 20:42:56.904842] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-04-15 20:42:56.904882] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002ca80
[2024-04-15 20:42:56.904933] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-04-15 20:42:56.906475] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-04-15 20:42:56.906513] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:16:13.660 pt4
20:42:56 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
20:42:56 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
20:42:56 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
00:16:13.660 [2024-04-15 20:42:57.104461] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
[2024-04-15 20:42:57.105906] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
[2024-04-15 20:42:57.105948] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
[2024-04-15 20:42:57.105972] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
[2024-04-15 20:42:57.106077] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600002df80
[2024-04-15 20:42:57.106087] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
[2024-04-15 20:42:57.106212] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00
[2024-04-15 20:42:57.106406] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600002df80
[2024-04-15 20:42:57.106417] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600002df80
[2024-04-15 20:42:57.106518] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
20:42:57 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
20:42:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
20:42:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
20:42:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
20:42:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
20:42:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
20:42:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
20:42:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
20:42:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
20:42:57 -- bdev/bdev_raid.sh@125 -- # local tmp
20:42:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
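For reference, the xtrace above amounts to the following RPC sequence (a minimal sketch, not part of the captured output; it assumes an SPDK target listening on /var/tmp/spdk-raid.sock and reuses the names from this run):
  # each RAID member is a 32 MiB, 512-byte-block malloc bdev wrapped in a passthru bdev
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  # ... repeated for malloc2-malloc4 / pt2-pt4 ...
  # assemble the members into a raid1 bdev with an on-disk superblock (-s)
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s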
00:16:13.660 20:42:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:13.920 20:42:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:13.920 "name": "raid_bdev1",
00:16:13.920 "uuid": "750de97e-738c-4a96-a8ef-f93459473e90",
00:16:13.920 "strip_size_kb": 0,
00:16:13.920 "state": "online",
00:16:13.920 "raid_level": "raid1",
00:16:13.920 "superblock": true,
00:16:13.920 "num_base_bdevs": 4,
00:16:13.920 "num_base_bdevs_discovered": 4,
00:16:13.920 "num_base_bdevs_operational": 4,
00:16:13.920 "base_bdevs_list": [
00:16:13.920 {
00:16:13.920 "name": "pt1",
00:16:13.920 "uuid": "bcdcfa0f-9635-5d8b-b43a-5bdbc594f525",
00:16:13.920 "is_configured": true,
00:16:13.920 "data_offset": 2048,
00:16:13.920 "data_size": 63488
00:16:13.920 },
00:16:13.920 {
00:16:13.920 "name": "pt2",
00:16:13.920 "uuid": "da4d64f8-0ae3-5f5f-9af0-32d6500e5a2a",
00:16:13.920 "is_configured": true,
00:16:13.920 "data_offset": 2048,
00:16:13.920 "data_size": 63488
00:16:13.920 },
00:16:13.920 {
00:16:13.920 "name": "pt3",
00:16:13.920 "uuid": "e29841bb-c8ef-50fe-a209-37191f80b2a0",
00:16:13.920 "is_configured": true,
00:16:13.920 "data_offset": 2048,
00:16:13.920 "data_size": 63488
00:16:13.920 },
00:16:13.920 {
00:16:13.920 "name": "pt4",
00:16:13.920 "uuid": "7d429054-59e6-54ef-8eb0-41b51f49c203",
00:16:13.920 "is_configured": true,
00:16:13.920 "data_offset": 2048,
00:16:13.920 "data_size": 63488
00:16:13.920 }
00:16:13.920 ]
00:16:13.920 }'
00:16:13.920 20:42:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:13.920 20:42:57 -- common/autotest_common.sh@10 -- # set +x
00:16:14.489 20:42:57 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:16:14.489 20:42:57 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid'
00:16:14.489 [2024-04-15 20:42:57.943171] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:14.489 20:42:57 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=750de97e-738c-4a96-a8ef-f93459473e90
00:16:14.489 20:42:57 -- bdev/bdev_raid.sh@380 -- # '[' -z 750de97e-738c-4a96-a8ef-f93459473e90 ']'
00:16:14.489 20:42:57 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:16:14.748 [2024-04-15 20:42:58.110802] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:14.748 [2024-04-15 20:42:58.110829] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:14.748 [2024-04-15 20:42:58.110886] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:14.748 [2024-04-15 20:42:58.110930] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:14.748 [2024-04-15 20:42:58.110939] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002df80 name raid_bdev1, state offline
00:16:14.748 20:42:58 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:14.748 20:42:58 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]'
00:16:15.007 20:42:58 -- bdev/bdev_raid.sh@386 -- # raid_bdev=
00:16:15.007 20:42:58 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']'
00:16:15.007 20:42:58 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:16:15.007 20:42:58 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
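For reference, the verify_raid_bdev_state helper seen above reduces to one query plus a jq filter (a sketch only, not captured output; same socket assumed):
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
The helper then compares the reported state, raid_level, strip_size_kb and num_base_bdevs_* fields against the expected values (online, raid1, 4 of 4 at this point in the run).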
00:16:15.007 20:42:58 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:16:15.007 20:42:58 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:16:15.265 20:42:58 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:16:15.265 20:42:58 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:16:15.523 20:42:58 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:16:15.523 20:42:58 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4
00:16:15.523 20:42:58 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:16:15.523 20:42:58 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:16:15.781 20:42:59 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']'
00:16:15.781 20:42:59 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:16:15.781 20:42:59 -- common/autotest_common.sh@640 -- # local es=0
00:16:15.781 20:42:59 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:16:15.781 20:42:59 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:15.781 20:42:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:16:15.781 20:42:59 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:15.781 20:42:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:16:15.781 20:42:59 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:15.781 20:42:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:16:15.781 20:42:59 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:16:15.781 20:42:59 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:16:15.781 20:42:59 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:16:15.781 [2024-04-15 20:42:59.273072] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
[2024-04-15 20:42:59.274474] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
[2024-04-15 20:42:59.274525] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
[2024-04-15 20:42:59.274551] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
[2024-04-15 20:42:59.274587] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1
[2024-04-15 20:42:59.274665] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2
[2024-04-15 20:42:59.274702] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3
[2024-04-15 20:42:59.274752] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4
[2024-04-15 20:42:59.274775] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
[2024-04-15 20:42:59.274787] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600002e580 name raid_bdev1, state configuring
00:16:15.781 request:
00:16:15.781 {
00:16:15.781 "name": "raid_bdev1",
00:16:15.781 "raid_level": "raid1",
00:16:15.781 "base_bdevs": [
00:16:15.781 "malloc1",
00:16:15.781 "malloc2",
00:16:15.781 "malloc3",
00:16:15.781 "malloc4"
00:16:15.781 ],
00:16:15.781 "superblock": false,
00:16:15.781 "method": "bdev_raid_create",
00:16:15.781 "req_id": 1
00:16:15.781 }
00:16:15.781 Got JSON-RPC error response
00:16:15.781 response:
00:16:15.781 {
00:16:15.781 "code": -17,
00:16:15.781 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:16:15.781 }
00:16:16.071 20:42:59 -- common/autotest_common.sh@643 -- # es=1
00:16:16.071 20:42:59 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:16:16.071 20:42:59 -- common/autotest_common.sh@662 -- # [[ -n '' ]]
00:16:16.071 20:42:59 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
00:16:16.071 20:42:59 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:16.071 20:42:59 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]'
00:16:16.071 20:42:59 -- bdev/bdev_raid.sh@403 -- # raid_bdev=
00:16:16.071 20:42:59 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']'
00:16:16.071 20:42:59 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:16:16.351 [2024-04-15 20:42:59.648459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
[2024-04-15 20:42:59.648537] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-04-15 20:42:59.648587] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600002fa80
[2024-04-15 20:42:59.648611] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-04-15 20:42:59.650295] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-04-15 20:42:59.650348] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
[2024-04-15 20:42:59.650445] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
[2024-04-15 20:42:59.650495] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:16:16.351 pt1
20:42:59 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
20:42:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
20:42:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
20:42:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
20:42:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
20:42:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
20:42:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
20:42:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
20:42:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
20:42:59 -- bdev/bdev_raid.sh@125 -- # local tmp
00:16:16.351 20:42:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:16.351 20:42:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:16.351 20:42:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:16.351 "name": "raid_bdev1",
00:16:16.351 "uuid": "750de97e-738c-4a96-a8ef-f93459473e90",
00:16:16.351 "strip_size_kb": 0,
00:16:16.351 "state": "configuring",
00:16:16.351 "raid_level": "raid1",
00:16:16.351 "superblock": true,
00:16:16.351 "num_base_bdevs": 4,
00:16:16.351 "num_base_bdevs_discovered": 1,
00:16:16.351 "num_base_bdevs_operational": 4,
00:16:16.351 "base_bdevs_list": [
00:16:16.351 {
00:16:16.351 "name": "pt1",
00:16:16.351 "uuid": "bcdcfa0f-9635-5d8b-b43a-5bdbc594f525",
00:16:16.351 "is_configured": true,
00:16:16.351 "data_offset": 2048,
00:16:16.351 "data_size": 63488
00:16:16.351 },
00:16:16.351 {
00:16:16.351 "name": null,
00:16:16.351 "uuid": "da4d64f8-0ae3-5f5f-9af0-32d6500e5a2a",
00:16:16.351 "is_configured": false,
00:16:16.351 "data_offset": 2048,
00:16:16.351 "data_size": 63488
00:16:16.351 },
00:16:16.351 {
00:16:16.351 "name": null,
00:16:16.351 "uuid": "e29841bb-c8ef-50fe-a209-37191f80b2a0",
00:16:16.351 "is_configured": false,
00:16:16.351 "data_offset": 2048,
00:16:16.351 "data_size": 63488
00:16:16.351 },
00:16:16.351 {
00:16:16.351 "name": null,
00:16:16.351 "uuid": "7d429054-59e6-54ef-8eb0-41b51f49c203",
00:16:16.351 "is_configured": false,
00:16:16.351 "data_offset": 2048,
00:16:16.351 "data_size": 63488
00:16:16.351 }
00:16:16.351 ]
00:16:16.351 }'
00:16:16.351 20:42:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:16.351 20:42:59 -- common/autotest_common.sh@10 -- # set +x
00:16:16.925 20:43:00 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']'
00:16:16.925 20:43:00 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:17.184 [2024-04-15 20:43:00.539145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
[2024-04-15 20:43:00.539215] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-04-15 20:43:00.539271] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000031880
[2024-04-15 20:43:00.539294] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-04-15 20:43:00.539587] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-04-15 20:43:00.539619] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
[2024-04-15 20:43:00.539911] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
[2024-04-15 20:43:00.539968] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:17.184 pt2
20:43:00 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:16:17.442 [2024-04-15 20:43:00.694923] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:16:17.442 20:43:00 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:16:17.442 20:43:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:16:17.442 20:43:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:16:17.442 20:43:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:17.442 20:43:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:17.442 20:43:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:16:17.442 20:43:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:17.442 20:43:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:17.442 20:43:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:17.442 20:43:00 -- bdev/bdev_raid.sh@125 -- # local tmp
00:16:17.442 20:43:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:17.442 20:43:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:17.442 20:43:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:17.442 "name": "raid_bdev1",
00:16:17.442 "uuid": "750de97e-738c-4a96-a8ef-f93459473e90",
00:16:17.442 "strip_size_kb": 0,
00:16:17.442 "state": "configuring",
00:16:17.442 "raid_level": "raid1",
00:16:17.442 "superblock": true,
00:16:17.442 "num_base_bdevs": 4,
00:16:17.442 "num_base_bdevs_discovered": 1,
00:16:17.442 "num_base_bdevs_operational": 4,
00:16:17.442 "base_bdevs_list": [
00:16:17.442 {
00:16:17.442 "name": "pt1",
00:16:17.442 "uuid": "bcdcfa0f-9635-5d8b-b43a-5bdbc594f525",
00:16:17.442 "is_configured": true,
00:16:17.442 "data_offset": 2048,
00:16:17.442 "data_size": 63488
00:16:17.442 },
00:16:17.442 {
00:16:17.442 "name": null,
00:16:17.443 "uuid": "da4d64f8-0ae3-5f5f-9af0-32d6500e5a2a",
00:16:17.443 "is_configured": false,
00:16:17.443 "data_offset": 2048,
00:16:17.443 "data_size": 63488
00:16:17.443 },
00:16:17.443 {
00:16:17.443 "name": null,
00:16:17.443 "uuid": "e29841bb-c8ef-50fe-a209-37191f80b2a0",
00:16:17.443 "is_configured": false,
00:16:17.443 "data_offset": 2048,
00:16:17.443 "data_size": 63488
00:16:17.443 },
00:16:17.443 {
00:16:17.443 "name": null,
00:16:17.443 "uuid": "7d429054-59e6-54ef-8eb0-41b51f49c203",
00:16:17.443 "is_configured": false,
00:16:17.443 "data_offset": 2048,
00:16:17.443 "data_size": 63488
00:16:17.443 }
00:16:17.443 ]
00:16:17.443 }'
00:16:17.443 20:43:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:17.443 20:43:00 -- common/autotest_common.sh@10 -- # set +x
00:16:18.379 20:43:01 -- bdev/bdev_raid.sh@422 -- # (( i = 1 ))
00:16:18.379 20:43:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:16:18.379 20:43:01 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:18.379 [2024-04-15 20:43:01.713364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
[2024-04-15 20:43:01.713437] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-04-15 20:43:01.713483] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000032d80
[2024-04-15 20:43:01.713502] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-04-15 20:43:01.713983] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-04-15 20:43:01.714033] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
[2024-04-15 20:43:01.714103] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
[2024-04-15 20:43:01.714124] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:18.379 pt2
20:43:01 -- bdev/bdev_raid.sh@422 -- # (( i++ ))
20:43:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
20:43:01 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:16:18.637 [2024-04-15 20:43:01.885109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
[2024-04-15 20:43:01.885186] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-04-15 20:43:01.885220] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000034280
[2024-04-15 20:43:01.885243] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-04-15 20:43:01.885523] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-04-15 20:43:01.885561] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
[2024-04-15 20:43:01.885630] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
[2024-04-15 20:43:01.885865] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:16:18.637 pt3
20:43:01 -- bdev/bdev_raid.sh@422 -- # (( i++ ))
20:43:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
20:43:01 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:16:18.637 [2024-04-15 20:43:02.040849] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
[2024-04-15 20:43:02.040927] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-04-15 20:43:02.040961] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000035780
[2024-04-15 20:43:02.040986] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-04-15 20:43:02.041290] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-04-15 20:43:02.041328] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
[2024-04-15 20:43:02.041390] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4
[2024-04-15 20:43:02.041408] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
[2024-04-15 20:43:02.041473] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000031280
[2024-04-15 20:43:02.041481] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
[2024-04-15 20:43:02.041544] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
[2024-04-15 20:43:02.041919] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000031280
[2024-04-15 20:43:02.041938] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000031280
[2024-04-15 20:43:02.042038] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:18.637 pt4
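For reference, the passage above makes two points: bdev_raid_create refuses base bdevs that still carry an old raid superblock (the NOT wrapper expects the call to fail), and re-creating the passthru bdevs lets the array reassemble itself from those superblocks. A sketch of the failing call (illustrative only; same socket assumed):
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
  # expected result: JSON-RPC error -17 'File exists', as logged above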
00:16:18.637 20:43:02 -- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:16:18.637 20:43:02 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:16:18.637 20:43:02 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:16:18.637 20:43:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:16:18.637 20:43:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:16:18.637 20:43:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:18.637 20:43:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:18.637 20:43:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:16:18.637 20:43:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:18.637 20:43:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:18.637 20:43:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:18.637 20:43:02 -- bdev/bdev_raid.sh@125 -- # local tmp
00:16:18.637 20:43:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:18.637 20:43:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:18.895 20:43:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:18.895 "name": "raid_bdev1",
00:16:18.895 "uuid": "750de97e-738c-4a96-a8ef-f93459473e90",
00:16:18.895 "strip_size_kb": 0,
00:16:18.895 "state": "online",
00:16:18.895 "raid_level": "raid1",
00:16:18.895 "superblock": true,
00:16:18.895 "num_base_bdevs": 4,
00:16:18.895 "num_base_bdevs_discovered": 4,
00:16:18.895 "num_base_bdevs_operational": 4,
00:16:18.895 "base_bdevs_list": [
00:16:18.895 {
00:16:18.895 "name": "pt1",
00:16:18.895 "uuid": "bcdcfa0f-9635-5d8b-b43a-5bdbc594f525",
00:16:18.895 "is_configured": true,
00:16:18.895 "data_offset": 2048,
00:16:18.895 "data_size": 63488
00:16:18.895 },
00:16:18.895 {
00:16:18.895 "name": "pt2",
00:16:18.895 "uuid": "da4d64f8-0ae3-5f5f-9af0-32d6500e5a2a",
00:16:18.895 "is_configured": true,
00:16:18.895 "data_offset": 2048,
00:16:18.895 "data_size": 63488
00:16:18.895 },
00:16:18.895 {
00:16:18.895 "name": "pt3",
00:16:18.895 "uuid": "e29841bb-c8ef-50fe-a209-37191f80b2a0",
00:16:18.895 "is_configured": true,
00:16:18.895 "data_offset": 2048,
00:16:18.895 "data_size": 63488
00:16:18.895 },
00:16:18.895 {
00:16:18.895 "name": "pt4",
00:16:18.895 "uuid": "7d429054-59e6-54ef-8eb0-41b51f49c203",
00:16:18.895 "is_configured": true,
00:16:18.895 "data_offset": 2048,
00:16:18.895 "data_size": 63488
00:16:18.895 }
00:16:18.895 ]
00:16:18.895 }'
00:16:18.895 20:43:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:18.895 20:43:02 -- common/autotest_common.sh@10 -- # set +x
00:16:19.463 20:43:02 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:16:19.463 20:43:02 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid'
00:16:19.728 [2024-04-15 20:43:03.023501] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:19.728 20:43:03 -- bdev/bdev_raid.sh@430 -- # '[' 750de97e-738c-4a96-a8ef-f93459473e90 '!=' 750de97e-738c-4a96-a8ef-f93459473e90 ']'
00:16:19.728 20:43:03 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1
00:16:19.728 20:43:03 -- bdev/bdev_raid.sh@195 -- # case $1 in
00:16:19.728 20:43:03 -- bdev/bdev_raid.sh@196 -- # return 0
00:16:19.728 20:43:03 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:16:19.728 [2024-04-15 20:43:03.199145] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:16:19.728 20:43:03 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:16:19.728 20:43:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:16:19.728 20:43:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:16:19.728 20:43:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:16:19.728 20:43:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:16:19.728 20:43:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:16:19.728 20:43:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:16:19.728 20:43:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:19.728 20:43:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:19.728 20:43:03 -- bdev/bdev_raid.sh@125 -- # local tmp
00:16:19.728 20:43:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:19.728 20:43:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:19.986 20:43:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:19.986 "name": "raid_bdev1",
00:16:19.986 "uuid": "750de97e-738c-4a96-a8ef-f93459473e90",
00:16:19.986 "strip_size_kb": 0,
00:16:19.986 "state": "online",
00:16:19.986 "raid_level": "raid1",
00:16:19.986 "superblock": true,
00:16:19.986 "num_base_bdevs": 4,
00:16:19.986 "num_base_bdevs_discovered": 3,
00:16:19.986 "num_base_bdevs_operational": 3,
00:16:19.986 "base_bdevs_list": [
00:16:19.986 {
00:16:19.986 "name": null,
00:16:19.986 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:19.986 "is_configured": false,
00:16:19.986 "data_offset": 2048,
00:16:19.986 "data_size": 63488
00:16:19.986 },
00:16:19.986 {
00:16:19.986 "name": "pt2",
00:16:19.986 "uuid": "da4d64f8-0ae3-5f5f-9af0-32d6500e5a2a",
00:16:19.986 "is_configured": true,
00:16:19.986 "data_offset": 2048,
00:16:19.986 "data_size": 63488
00:16:19.986 },
00:16:19.986 {
00:16:19.986 "name": "pt3",
00:16:19.986 "uuid": "e29841bb-c8ef-50fe-a209-37191f80b2a0",
00:16:19.986 "is_configured": true,
00:16:19.986 "data_offset": 2048,
00:16:19.986 "data_size": 63488
00:16:19.986 },
00:16:19.986 {
00:16:19.986 "name": "pt4",
00:16:19.986 "uuid": "7d429054-59e6-54ef-8eb0-41b51f49c203",
00:16:19.986 "is_configured": true,
00:16:19.986 "data_offset": 2048,
00:16:19.986 "data_size": 63488
00:16:19.986 }
00:16:19.986 ]
00:16:19.986 }'
00:16:19.986 20:43:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:19.986 20:43:03 -- common/autotest_common.sh@10 -- # set +x
00:16:20.552 20:43:03 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:16:20.811 [2024-04-15 20:43:04.157654] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:16:20.811 [2024-04-15 20:43:04.157689] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:16:20.811 [2024-04-15 20:43:04.157741] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:16:20.811 [2024-04-15 20:43:04.157783] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:16:20.811 [2024-04-15 20:43:04.157791] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000031280 name raid_bdev1, state offline
00:16:20.811 20:43:04 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:20.811 20:43:04 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]'
00:16:21.070 20:43:04 -- bdev/bdev_raid.sh@443 -- # raid_bdev=
00:16:21.070 20:43:04 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']'
00:16:21.070 20:43:04 -- bdev/bdev_raid.sh@449 -- # (( i = 1 ))
00:16:21.070 20:43:04 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:16:21.070 20:43:04 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:16:21.070 20:43:04 -- bdev/bdev_raid.sh@449 -- # (( i++ ))
00:16:21.070 20:43:04 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:16:21.070 20:43:04 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:16:21.328 20:43:04 -- bdev/bdev_raid.sh@449 -- # (( i++ ))
00:16:21.328 20:43:04 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:16:21.328 20:43:04 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4
00:16:21.586 20:43:04 -- bdev/bdev_raid.sh@449 -- # (( i++ ))
00:16:21.586 20:43:04 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:16:21.586 20:43:04 -- bdev/bdev_raid.sh@454 -- # (( i = 1 ))
00:16:21.586 20:43:04 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 ))
00:16:21.586 20:43:04 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:16:21.586 [2024-04-15 20:43:05.020418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
[2024-04-15 20:43:05.020498] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-04-15 20:43:05.020538] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000036c80
[2024-04-15 20:43:05.020561] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-04-15 20:43:05.022232] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-04-15 20:43:05.022286] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
[2024-04-15 20:43:05.022380] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
[2024-04-15 20:43:05.022421] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:16:21.586 pt2
20:43:05 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
20:43:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
20:43:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
20:43:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
20:43:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
20:43:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
20:43:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
20:43:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
20:43:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
20:43:05 -- bdev/bdev_raid.sh@125 -- # local tmp
20:43:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
20:43:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:21.843 20:43:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:21.843 "name": "raid_bdev1",
00:16:21.843 "uuid": "750de97e-738c-4a96-a8ef-f93459473e90",
00:16:21.843 "strip_size_kb": 0,
00:16:21.843 "state": "configuring",
00:16:21.843 "raid_level": "raid1",
00:16:21.843 "superblock": true,
00:16:21.843 "num_base_bdevs": 4,
00:16:21.843 "num_base_bdevs_discovered": 1,
00:16:21.843 "num_base_bdevs_operational": 3,
00:16:21.843 "base_bdevs_list": [
00:16:21.843 {
00:16:21.843 "name": null,
00:16:21.843 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:21.843 "is_configured": false,
00:16:21.843 "data_offset": 2048,
00:16:21.843 "data_size": 63488
00:16:21.843 },
00:16:21.843 {
00:16:21.843 "name": "pt2",
00:16:21.843 "uuid": "da4d64f8-0ae3-5f5f-9af0-32d6500e5a2a",
00:16:21.843 "is_configured": true,
00:16:21.843 "data_offset": 2048,
00:16:21.843 "data_size": 63488
00:16:21.843 },
00:16:21.843 {
00:16:21.843 "name": null,
00:16:21.843 "uuid": "e29841bb-c8ef-50fe-a209-37191f80b2a0",
00:16:21.843 "is_configured": false,
00:16:21.843 "data_offset": 2048,
00:16:21.843 "data_size": 63488
00:16:21.843 },
00:16:21.843 {
00:16:21.843 "name": null,
00:16:21.843 "uuid": "7d429054-59e6-54ef-8eb0-41b51f49c203",
00:16:21.843 "is_configured": false,
00:16:21.843 "data_offset": 2048,
00:16:21.843 "data_size": 63488
00:16:21.843 }
00:16:21.843 ]
00:16:21.843 }'
00:16:21.843 20:43:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:21.843 20:43:05 -- common/autotest_common.sh@10 -- # set +x
00:16:22.410 20:43:05 -- bdev/bdev_raid.sh@454 -- # (( i++ ))
00:16:22.410 20:43:05 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 ))
00:16:22.410 20:43:05 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:16:22.669 [2024-04-15 20:43:06.034897] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
[2024-04-15 20:43:06.034982] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-04-15 20:43:06.035025] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000038780
[2024-04-15 20:43:06.035053] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-04-15 20:43:06.035382] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-04-15 20:43:06.035415] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
[2024-04-15 20:43:06.035496] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
[2024-04-15 20:43:06.035517] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:16:22.669 pt3
20:43:06 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
20:43:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
20:43:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
20:43:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
20:43:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
20:43:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
20:43:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
20:43:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:16:22.669 20:43:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:16:22.669 20:43:06 -- bdev/bdev_raid.sh@125 -- # local tmp
00:16:22.669 20:43:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:22.669 20:43:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:22.927 20:43:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:16:22.927 "name": "raid_bdev1",
00:16:22.927 "uuid": "750de97e-738c-4a96-a8ef-f93459473e90",
00:16:22.927 "strip_size_kb": 0,
00:16:22.927 "state": "configuring",
00:16:22.927 "raid_level": "raid1",
00:16:22.927 "superblock": true,
00:16:22.927 "num_base_bdevs": 4,
00:16:22.927 "num_base_bdevs_discovered": 2,
00:16:22.927 "num_base_bdevs_operational": 3,
00:16:22.927 "base_bdevs_list": [
00:16:22.927 {
00:16:22.927 "name": null,
00:16:22.927 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:22.927 "is_configured": false,
00:16:22.927 "data_offset": 2048,
00:16:22.927 "data_size": 63488
00:16:22.927 },
00:16:22.927 {
00:16:22.927 "name": "pt2",
00:16:22.927 "uuid": "da4d64f8-0ae3-5f5f-9af0-32d6500e5a2a",
00:16:22.927 "is_configured": true,
00:16:22.927 "data_offset": 2048,
00:16:22.927 "data_size": 63488
00:16:22.927 },
00:16:22.927 {
00:16:22.927 "name": "pt3",
00:16:22.927 "uuid": "e29841bb-c8ef-50fe-a209-37191f80b2a0",
00:16:22.927 "is_configured": true,
00:16:22.927 "data_offset": 2048,
00:16:22.927 "data_size": 63488
00:16:22.927 },
00:16:22.927 {
00:16:22.927 "name": null,
00:16:22.927 "uuid": "7d429054-59e6-54ef-8eb0-41b51f49c203",
00:16:22.927 "is_configured": false,
00:16:22.927 "data_offset": 2048,
00:16:22.927 "data_size": 63488
00:16:22.927 }
00:16:22.927 ]
00:16:22.927 }'
00:16:22.927 20:43:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:16:22.927 20:43:06 -- common/autotest_common.sh@10 -- # set +x
00:16:23.494 20:43:06 -- bdev/bdev_raid.sh@454 -- # (( i++ ))
00:16:23.494 20:43:06 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 ))
00:16:23.494 20:43:06 -- bdev/bdev_raid.sh@462 -- # i=3
00:16:23.494 20:43:06 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:16:23.753 [2024-04-15 20:43:06.997420] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
[2024-04-15 20:43:06.997487] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-04-15 20:43:06.997530] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000039c80
[2024-04-15 20:43:06.997550] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-04-15 20:43:06.997991] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-04-15 20:43:06.998025] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
[2024-04-15 20:43:06.998104] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4
[2024-04-15 20:43:06.998125] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
[2024-04-15 20:43:06.998195] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000038180
[2024-04-15 20:43:06.998203] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
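For reference, the reassembly above is incremental: each bdev_passthru_create triggers superblock examination and claiming, and raid_bdev1 stays in the configuring state until its operational set is back, going online (as the following entries show) once pt4 is re-created, with three of the four original members and pt1 absent. A sketch of one such step (illustrative only; same socket assumed):
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'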
00:16:23.753 [2024-04-15 20:43:06.998290] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:23.753 [2024-04-15 20:43:06.998437] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000038180 00:16:23.753 [2024-04-15 20:43:06.998446] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000038180 00:16:23.753 [2024-04-15 20:43:06.998519] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.753 pt4 00:16:23.753 20:43:07 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:23.753 20:43:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:23.753 20:43:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:23.753 20:43:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:23.753 20:43:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:23.753 20:43:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:23.753 20:43:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:23.753 20:43:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:23.753 20:43:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:23.754 20:43:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:23.754 20:43:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.754 20:43:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.754 20:43:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:23.754 "name": "raid_bdev1", 00:16:23.754 "uuid": "750de97e-738c-4a96-a8ef-f93459473e90", 00:16:23.754 "strip_size_kb": 0, 00:16:23.754 "state": "online", 00:16:23.754 "raid_level": "raid1", 00:16:23.754 "superblock": true, 00:16:23.754 "num_base_bdevs": 4, 00:16:23.754 "num_base_bdevs_discovered": 3, 00:16:23.754 "num_base_bdevs_operational": 3, 00:16:23.754 "base_bdevs_list": [ 00:16:23.754 { 00:16:23.754 "name": null, 00:16:23.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.754 "is_configured": false, 00:16:23.754 "data_offset": 2048, 00:16:23.754 "data_size": 63488 00:16:23.754 }, 00:16:23.754 { 00:16:23.754 "name": "pt2", 00:16:23.754 "uuid": "da4d64f8-0ae3-5f5f-9af0-32d6500e5a2a", 00:16:23.754 "is_configured": true, 00:16:23.754 "data_offset": 2048, 00:16:23.754 "data_size": 63488 00:16:23.754 }, 00:16:23.754 { 00:16:23.754 "name": "pt3", 00:16:23.754 "uuid": "e29841bb-c8ef-50fe-a209-37191f80b2a0", 00:16:23.754 "is_configured": true, 00:16:23.754 "data_offset": 2048, 00:16:23.754 "data_size": 63488 00:16:23.754 }, 00:16:23.754 { 00:16:23.754 "name": "pt4", 00:16:23.754 "uuid": "7d429054-59e6-54ef-8eb0-41b51f49c203", 00:16:23.754 "is_configured": true, 00:16:23.754 "data_offset": 2048, 00:16:23.754 "data_size": 63488 00:16:23.754 } 00:16:23.754 ] 00:16:23.754 }' 00:16:23.754 20:43:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:23.754 20:43:07 -- common/autotest_common.sh@10 -- # set +x 00:16:24.322 20:43:07 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:16:24.322 20:43:07 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:24.322 [2024-04-15 20:43:07.808285] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:24.322 [2024-04-15 20:43:07.808318] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to 
offline 00:16:24.322 [2024-04-15 20:43:07.808373] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:24.322 [2024-04-15 20:43:07.808415] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:24.322 [2024-04-15 20:43:07.808423] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000038180 name raid_bdev1, state offline 00:16:24.581 20:43:07 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:16:24.581 20:43:07 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.581 20:43:07 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:16:24.581 20:43:07 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:16:24.581 20:43:07 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:24.840 [2024-04-15 20:43:08.127868] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:24.840 [2024-04-15 20:43:08.127937] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.840 [2024-04-15 20:43:08.127974] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003b180 00:16:24.840 [2024-04-15 20:43:08.127992] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.840 [2024-04-15 20:43:08.129200] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.840 [2024-04-15 20:43:08.129263] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:24.840 [2024-04-15 20:43:08.129341] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:24.840 [2024-04-15 20:43:08.129380] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:24.840 pt1 00:16:24.840 20:43:08 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:16:24.840 20:43:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:24.840 20:43:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:24.840 20:43:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:24.840 20:43:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:24.840 20:43:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:16:24.840 20:43:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:24.840 20:43:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:24.840 20:43:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:24.840 20:43:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:24.840 20:43:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.840 20:43:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.840 20:43:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:24.840 "name": "raid_bdev1", 00:16:24.840 "uuid": "750de97e-738c-4a96-a8ef-f93459473e90", 00:16:24.840 "strip_size_kb": 0, 00:16:24.840 "state": "configuring", 00:16:24.840 "raid_level": "raid1", 00:16:24.840 "superblock": true, 00:16:24.840 "num_base_bdevs": 4, 00:16:24.840 "num_base_bdevs_discovered": 1, 00:16:24.840 "num_base_bdevs_operational": 4, 00:16:24.840 "base_bdevs_list": [ 00:16:24.840 { 00:16:24.840 "name": "pt1", 00:16:24.840 "uuid": 
"bcdcfa0f-9635-5d8b-b43a-5bdbc594f525", 00:16:24.840 "is_configured": true, 00:16:24.840 "data_offset": 2048, 00:16:24.840 "data_size": 63488 00:16:24.840 }, 00:16:24.840 { 00:16:24.840 "name": null, 00:16:24.840 "uuid": "da4d64f8-0ae3-5f5f-9af0-32d6500e5a2a", 00:16:24.840 "is_configured": false, 00:16:24.840 "data_offset": 2048, 00:16:24.840 "data_size": 63488 00:16:24.840 }, 00:16:24.840 { 00:16:24.840 "name": null, 00:16:24.840 "uuid": "e29841bb-c8ef-50fe-a209-37191f80b2a0", 00:16:24.840 "is_configured": false, 00:16:24.840 "data_offset": 2048, 00:16:24.840 "data_size": 63488 00:16:24.840 }, 00:16:24.840 { 00:16:24.840 "name": null, 00:16:24.840 "uuid": "7d429054-59e6-54ef-8eb0-41b51f49c203", 00:16:24.840 "is_configured": false, 00:16:24.840 "data_offset": 2048, 00:16:24.840 "data_size": 63488 00:16:24.840 } 00:16:24.841 ] 00:16:24.841 }' 00:16:24.841 20:43:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:24.841 20:43:08 -- common/autotest_common.sh@10 -- # set +x 00:16:25.408 20:43:08 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:16:25.408 20:43:08 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:16:25.408 20:43:08 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:25.667 20:43:08 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:16:25.667 20:43:08 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:16:25.667 20:43:08 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:25.667 20:43:09 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:16:25.667 20:43:09 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:16:25.667 20:43:09 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:16:25.925 20:43:09 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:16:25.925 20:43:09 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:16:25.925 20:43:09 -- bdev/bdev_raid.sh@489 -- # i=3 00:16:25.925 20:43:09 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:26.185 [2024-04-15 20:43:09.469825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:26.185 [2024-04-15 20:43:09.469906] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.185 [2024-04-15 20:43:09.469947] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003cc80 00:16:26.185 [2024-04-15 20:43:09.469973] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.185 [2024-04-15 20:43:09.470294] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.185 [2024-04-15 20:43:09.470332] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:26.185 [2024-04-15 20:43:09.470411] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:16:26.185 [2024-04-15 20:43:09.470423] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:26.185 [2024-04-15 20:43:09.470431] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:26.185 [2024-04-15 20:43:09.470448] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003c680 name raid_bdev1, state configuring 
00:16:26.185 [2024-04-15 20:43:09.470532] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:26.185 pt4 00:16:26.185 20:43:09 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:26.185 20:43:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:26.185 20:43:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:26.185 20:43:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:26.185 20:43:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:26.185 20:43:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:26.185 20:43:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:26.185 20:43:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:26.185 20:43:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:26.185 20:43:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:26.185 20:43:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.185 20:43:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.185 20:43:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:26.185 "name": "raid_bdev1", 00:16:26.185 "uuid": "750de97e-738c-4a96-a8ef-f93459473e90", 00:16:26.185 "strip_size_kb": 0, 00:16:26.185 "state": "configuring", 00:16:26.185 "raid_level": "raid1", 00:16:26.185 "superblock": true, 00:16:26.185 "num_base_bdevs": 4, 00:16:26.185 "num_base_bdevs_discovered": 1, 00:16:26.185 "num_base_bdevs_operational": 3, 00:16:26.185 "base_bdevs_list": [ 00:16:26.185 { 00:16:26.185 "name": null, 00:16:26.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.185 "is_configured": false, 00:16:26.185 "data_offset": 2048, 00:16:26.185 "data_size": 63488 00:16:26.185 }, 00:16:26.185 { 00:16:26.185 "name": null, 00:16:26.185 "uuid": "da4d64f8-0ae3-5f5f-9af0-32d6500e5a2a", 00:16:26.185 "is_configured": false, 00:16:26.185 "data_offset": 2048, 00:16:26.185 "data_size": 63488 00:16:26.185 }, 00:16:26.185 { 00:16:26.185 "name": null, 00:16:26.185 "uuid": "e29841bb-c8ef-50fe-a209-37191f80b2a0", 00:16:26.185 "is_configured": false, 00:16:26.185 "data_offset": 2048, 00:16:26.185 "data_size": 63488 00:16:26.185 }, 00:16:26.185 { 00:16:26.185 "name": "pt4", 00:16:26.185 "uuid": "7d429054-59e6-54ef-8eb0-41b51f49c203", 00:16:26.185 "is_configured": true, 00:16:26.185 "data_offset": 2048, 00:16:26.185 "data_size": 63488 00:16:26.185 } 00:16:26.185 ] 00:16:26.185 }' 00:16:26.185 20:43:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:26.185 20:43:09 -- common/autotest_common.sh@10 -- # set +x 00:16:26.752 20:43:10 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:16:26.752 20:43:10 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:16:26.752 20:43:10 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:27.010 [2024-04-15 20:43:10.380637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:27.010 [2024-04-15 20:43:10.380725] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.010 [2024-04-15 20:43:10.380772] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003e480 00:16:27.010 [2024-04-15 20:43:10.380801] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.010 [2024-04-15 
20:43:10.381087] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.010 [2024-04-15 20:43:10.381132] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:27.010 [2024-04-15 20:43:10.381213] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:27.010 [2024-04-15 20:43:10.381235] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:27.010 pt2 00:16:27.010 20:43:10 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:16:27.010 20:43:10 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:16:27.010 20:43:10 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:27.269 [2024-04-15 20:43:10.528413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:27.269 [2024-04-15 20:43:10.528490] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.269 [2024-04-15 20:43:10.528531] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:16:27.269 [2024-04-15 20:43:10.528573] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.269 [2024-04-15 20:43:10.529043] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.269 [2024-04-15 20:43:10.529090] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:27.269 [2024-04-15 20:43:10.529174] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:16:27.269 [2024-04-15 20:43:10.529202] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:27.269 [2024-04-15 20:43:10.529287] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003de80 00:16:27.269 [2024-04-15 20:43:10.529299] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:27.269 [2024-04-15 20:43:10.529381] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:16:27.269 [2024-04-15 20:43:10.529552] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003de80 00:16:27.269 [2024-04-15 20:43:10.529563] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600003de80 00:16:27.269 [2024-04-15 20:43:10.529675] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.269 pt3 00:16:27.269 20:43:10 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:16:27.269 20:43:10 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:16:27.269 20:43:10 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:27.269 20:43:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:27.269 20:43:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:27.269 20:43:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:27.269 20:43:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:27.269 20:43:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:27.269 20:43:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:27.269 20:43:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:27.269 20:43:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:27.269 20:43:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:27.269 20:43:10 
-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.269 20:43:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.269 20:43:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:27.269 "name": "raid_bdev1", 00:16:27.269 "uuid": "750de97e-738c-4a96-a8ef-f93459473e90", 00:16:27.269 "strip_size_kb": 0, 00:16:27.269 "state": "online", 00:16:27.269 "raid_level": "raid1", 00:16:27.269 "superblock": true, 00:16:27.269 "num_base_bdevs": 4, 00:16:27.269 "num_base_bdevs_discovered": 3, 00:16:27.269 "num_base_bdevs_operational": 3, 00:16:27.269 "base_bdevs_list": [ 00:16:27.269 { 00:16:27.269 "name": null, 00:16:27.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.269 "is_configured": false, 00:16:27.269 "data_offset": 2048, 00:16:27.269 "data_size": 63488 00:16:27.269 }, 00:16:27.269 { 00:16:27.269 "name": "pt2", 00:16:27.269 "uuid": "da4d64f8-0ae3-5f5f-9af0-32d6500e5a2a", 00:16:27.269 "is_configured": true, 00:16:27.269 "data_offset": 2048, 00:16:27.269 "data_size": 63488 00:16:27.269 }, 00:16:27.269 { 00:16:27.269 "name": "pt3", 00:16:27.269 "uuid": "e29841bb-c8ef-50fe-a209-37191f80b2a0", 00:16:27.269 "is_configured": true, 00:16:27.269 "data_offset": 2048, 00:16:27.269 "data_size": 63488 00:16:27.269 }, 00:16:27.269 { 00:16:27.269 "name": "pt4", 00:16:27.269 "uuid": "7d429054-59e6-54ef-8eb0-41b51f49c203", 00:16:27.269 "is_configured": true, 00:16:27.269 "data_offset": 2048, 00:16:27.269 "data_size": 63488 00:16:27.269 } 00:16:27.269 ] 00:16:27.269 }' 00:16:27.269 20:43:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:27.269 20:43:10 -- common/autotest_common.sh@10 -- # set +x 00:16:27.837 20:43:11 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:27.837 20:43:11 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:16:28.098 [2024-04-15 20:43:11.343355] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:28.098 20:43:11 -- bdev/bdev_raid.sh@506 -- # '[' 750de97e-738c-4a96-a8ef-f93459473e90 '!=' 750de97e-738c-4a96-a8ef-f93459473e90 ']' 00:16:28.098 20:43:11 -- bdev/bdev_raid.sh@511 -- # killprocess 55679 00:16:28.098 20:43:11 -- common/autotest_common.sh@926 -- # '[' -z 55679 ']' 00:16:28.098 20:43:11 -- common/autotest_common.sh@930 -- # kill -0 55679 00:16:28.098 20:43:11 -- common/autotest_common.sh@931 -- # uname 00:16:28.099 20:43:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:28.099 20:43:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 55679 00:16:28.099 20:43:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:28.099 killing process with pid 55679 00:16:28.099 20:43:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:28.099 20:43:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 55679' 00:16:28.099 20:43:11 -- common/autotest_common.sh@945 -- # kill 55679 00:16:28.099 [2024-04-15 20:43:11.393015] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:28.099 20:43:11 -- common/autotest_common.sh@950 -- # wait 55679 00:16:28.099 [2024-04-15 20:43:11.393078] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:28.099 [2024-04-15 20:43:11.393120] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:28.099 [2024-04-15 20:43:11.393129] 
bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003de80 name raid_bdev1, state offline 00:16:28.357 [2024-04-15 20:43:11.749039] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:29.741 ************************************ 00:16:29.741 END TEST raid_superblock_test 00:16:29.741 ************************************ 00:16:29.741 20:43:13 -- bdev/bdev_raid.sh@513 -- # return 0 00:16:29.741 00:16:29.741 real 0m18.347s 00:16:29.741 user 0m33.104s 00:16:29.741 sys 0m2.294s 00:16:29.741 20:43:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:29.741 20:43:13 -- common/autotest_common.sh@10 -- # set +x 00:16:29.741 20:43:13 -- bdev/bdev_raid.sh@733 -- # '[' '' = true ']' 00:16:29.741 20:43:13 -- bdev/bdev_raid.sh@742 -- # '[' n == y ']' 00:16:29.741 20:43:13 -- bdev/bdev_raid.sh@754 -- # rm -f /raidrandtest 00:16:29.741 ************************************ 00:16:29.741 END TEST bdev_raid 00:16:29.741 ************************************ 00:16:29.741 00:16:29.741 real 5m9.273s 00:16:29.741 user 8m38.853s 00:16:29.741 sys 0m39.441s 00:16:29.741 20:43:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:29.741 20:43:13 -- common/autotest_common.sh@10 -- # set +x 00:16:29.741 20:43:13 -- spdk/autotest.sh@197 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:16:29.741 20:43:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:29.741 20:43:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:29.741 20:43:13 -- common/autotest_common.sh@10 -- # set +x 00:16:29.741 ************************************ 00:16:29.741 START TEST bdevperf_config 00:16:29.741 ************************************ 00:16:29.741 20:43:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:16:30.001 * Looking for test storage... 
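The verify_raid_bdev_state sequence traced above reduces to a single RPC call whose JSON is filtered with jq and compared field by field. A minimal standalone sketch of that pattern, assuming the rpc.py path and RPC socket from this run; the expected values (online, 3 discovered base bdevs) are taken from the raid_bdev_info JSON printed earlier, and the script itself is illustrative, not part of the test suite:

#!/usr/bin/env bash
# Query one raid bdev over the SPDK RPC socket and assert its state,
# mirroring what bdev_raid.sh's verify_raid_bdev_state does above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
       jq -r '.[] | select(.name == "raid_bdev1")')

state=$(jq -r '.state' <<<"$info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<<"$info")

[[ "$state" == online ]] || { echo "unexpected state: $state" >&2; exit 1; }
(( discovered == 3 ))    || { echo "expected 3 base bdevs, got $discovered" >&2; exit 1; }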
00:16:30.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:16:30.001 20:43:13 -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:16:30.001 20:43:13 -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:16:30.001 20:43:13 -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:16:30.001 20:43:13 -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:16:30.001 20:43:13 -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:30.001 20:43:13 -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:16:30.001 20:43:13 -- bdevperf/common.sh@8 -- # local job_section=global 00:16:30.001 20:43:13 -- bdevperf/common.sh@9 -- # local rw=read 00:16:30.001 20:43:13 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:16:30.001 20:43:13 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:16:30.001 20:43:13 -- bdevperf/common.sh@13 -- # cat 00:16:30.001 00:16:30.001 20:43:13 -- bdevperf/common.sh@18 -- # job='[global]' 00:16:30.001 20:43:13 -- bdevperf/common.sh@19 -- # echo 00:16:30.001 20:43:13 -- bdevperf/common.sh@20 -- # cat 00:16:30.001 00:16:30.001 20:43:13 -- bdevperf/test_config.sh@18 -- # create_job job0 00:16:30.001 20:43:13 -- bdevperf/common.sh@8 -- # local job_section=job0 00:16:30.001 20:43:13 -- bdevperf/common.sh@9 -- # local rw= 00:16:30.001 20:43:13 -- bdevperf/common.sh@10 -- # local filename= 00:16:30.001 20:43:13 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:16:30.001 20:43:13 -- bdevperf/common.sh@18 -- # job='[job0]' 00:16:30.001 20:43:13 -- bdevperf/common.sh@19 -- # echo 00:16:30.001 20:43:13 -- bdevperf/common.sh@20 -- # cat 00:16:30.001 00:16:30.001 20:43:13 -- bdevperf/test_config.sh@19 -- # create_job job1 00:16:30.001 20:43:13 -- bdevperf/common.sh@8 -- # local job_section=job1 00:16:30.001 20:43:13 -- bdevperf/common.sh@9 -- # local rw= 00:16:30.001 20:43:13 -- bdevperf/common.sh@10 -- # local filename= 00:16:30.001 20:43:13 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:16:30.001 20:43:13 -- bdevperf/common.sh@18 -- # job='[job1]' 00:16:30.001 20:43:13 -- bdevperf/common.sh@19 -- # echo 00:16:30.001 20:43:13 -- bdevperf/common.sh@20 -- # cat 00:16:30.001 00:16:30.001 20:43:13 -- bdevperf/test_config.sh@20 -- # create_job job2 00:16:30.001 20:43:13 -- bdevperf/common.sh@8 -- # local job_section=job2 00:16:30.001 20:43:13 -- bdevperf/common.sh@9 -- # local rw= 00:16:30.001 20:43:13 -- bdevperf/common.sh@10 -- # local filename= 00:16:30.001 20:43:13 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:16:30.001 20:43:13 -- bdevperf/common.sh@18 -- # job='[job2]' 00:16:30.001 20:43:13 -- bdevperf/common.sh@19 -- # echo 00:16:30.001 20:43:13 -- bdevperf/common.sh@20 -- # cat 00:16:30.001 00:16:30.001 20:43:13 -- bdevperf/test_config.sh@21 -- # create_job job3 00:16:30.001 20:43:13 -- bdevperf/common.sh@8 -- # local job_section=job3 00:16:30.001 20:43:13 -- bdevperf/common.sh@9 -- # local rw= 00:16:30.001 20:43:13 -- bdevperf/common.sh@10 -- # local filename= 00:16:30.001 20:43:13 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:16:30.001 20:43:13 -- bdevperf/common.sh@18 -- # job='[job3]' 00:16:30.001 20:43:13 -- bdevperf/common.sh@19 -- # echo 00:16:30.001 20:43:13 -- bdevperf/common.sh@20 -- # cat 00:16:30.001 20:43:13 -- bdevperf/test_config.sh@22 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:16:35.289 20:43:17 -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-04-15 20:43:13.465602] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:16:35.289 [2024-04-15 20:43:13.465794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56376 ] 00:16:35.289 Using job config with 4 jobs 00:16:35.289 [2024-04-15 20:43:13.637723] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.289 [2024-04-15 20:43:13.853779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.289 cpumask for '\''job0'\'' is too big 00:16:35.289 cpumask for '\''job1'\'' is too big 00:16:35.289 cpumask for '\''job2'\'' is too big 00:16:35.289 cpumask for '\''job3'\'' is too big 00:16:35.289 Running I/O for 2 seconds... 00:16:35.289 00:16:35.289 Latency(us) 00:16:35.289 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:35.289 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:16:35.289 Malloc0 : 2.00 105138.35 102.67 0.00 0.00 2433.87 486.91 3947.95 00:16:35.289 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:16:35.289 Malloc0 : 2.00 105122.35 102.66 0.00 0.00 2432.61 483.62 3474.20 00:16:35.289 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:16:35.289 Malloc0 : 2.01 105169.06 102.70 0.00 0.00 2430.30 460.59 3066.24 00:16:35.289 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:16:35.289 Malloc0 : 2.01 105153.86 102.69 0.00 0.00 2429.33 427.69 3276.80 00:16:35.289 =================================================================================================================== 00:16:35.289 Total : 420583.62 410.73 0.00 0.00 2431.53 427.69 3947.95' 00:16:35.289 20:43:17 -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-04-15 20:43:13.465602] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:16:35.289 [2024-04-15 20:43:13.465794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56376 ] 00:16:35.289 Using job config with 4 jobs 00:16:35.289 [2024-04-15 20:43:13.637723] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.289 [2024-04-15 20:43:13.853779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.289 cpumask for '\''job0'\'' is too big 00:16:35.289 cpumask for '\''job1'\'' is too big 00:16:35.289 cpumask for '\''job2'\'' is too big 00:16:35.289 cpumask for '\''job3'\'' is too big 00:16:35.289 Running I/O for 2 seconds... 
00:16:35.289 00:16:35.289 Latency(us) 00:16:35.289 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:35.289 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:16:35.289 Malloc0 : 2.00 105138.35 102.67 0.00 0.00 2433.87 486.91 3947.95 00:16:35.289 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:16:35.289 Malloc0 : 2.00 105122.35 102.66 0.00 0.00 2432.61 483.62 3474.20 00:16:35.289 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:16:35.289 Malloc0 : 2.01 105169.06 102.70 0.00 0.00 2430.30 460.59 3066.24 00:16:35.289 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:16:35.289 Malloc0 : 2.01 105153.86 102.69 0.00 0.00 2429.33 427.69 3276.80 00:16:35.289 =================================================================================================================== 00:16:35.289 Total : 420583.62 410.73 0.00 0.00 2431.53 427.69 3947.95' 00:16:35.289 20:43:17 -- bdevperf/common.sh@32 -- # echo '[2024-04-15 20:43:13.465602] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:16:35.289 [2024-04-15 20:43:13.465794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56376 ] 00:16:35.289 Using job config with 4 jobs 00:16:35.289 [2024-04-15 20:43:13.637723] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.289 [2024-04-15 20:43:13.853779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.289 cpumask for '\''job0'\'' is too big 00:16:35.289 cpumask for '\''job1'\'' is too big 00:16:35.289 cpumask for '\''job2'\'' is too big 00:16:35.289 cpumask for '\''job3'\'' is too big 00:16:35.289 Running I/O for 2 seconds... 00:16:35.289 00:16:35.289 Latency(us) 00:16:35.289 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:35.289 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:16:35.289 Malloc0 : 2.00 105138.35 102.67 0.00 0.00 2433.87 486.91 3947.95 00:16:35.289 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:16:35.290 Malloc0 : 2.00 105122.35 102.66 0.00 0.00 2432.61 483.62 3474.20 00:16:35.290 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:16:35.290 Malloc0 : 2.01 105169.06 102.70 0.00 0.00 2430.30 460.59 3066.24 00:16:35.290 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:16:35.290 Malloc0 : 2.01 105153.86 102.69 0.00 0.00 2429.33 427.69 3276.80 00:16:35.290 =================================================================================================================== 00:16:35.290 Total : 420583.62 410.73 0.00 0.00 2431.53 427.69 3947.95' 00:16:35.290 20:43:17 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:16:35.290 20:43:17 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:16:35.290 20:43:17 -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:16:35.290 20:43:17 -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:16:35.290 [2024-04-15 20:43:18.123467] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:16:35.290 [2024-04-15 20:43:18.123630] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56441 ] 00:16:35.290 [2024-04-15 20:43:18.278752] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.290 [2024-04-15 20:43:18.491248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.548 cpumask for 'job0' is too big 00:16:35.548 cpumask for 'job1' is too big 00:16:35.548 cpumask for 'job2' is too big 00:16:35.548 cpumask for 'job3' is too big 00:16:39.739 20:43:22 -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:16:39.739 Running I/O for 2 seconds... 00:16:39.739 00:16:39.739 Latency(us) 00:16:39.739 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.739 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:16:39.739 Malloc0 : 2.00 105222.41 102.76 0.00 0.00 2431.57 552.71 4132.19 00:16:39.739 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:16:39.739 Malloc0 : 2.01 105206.67 102.74 0.00 0.00 2430.51 457.30 3658.44 00:16:39.739 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:16:39.739 Malloc0 : 2.01 105191.77 102.73 0.00 0.00 2429.68 454.01 3184.68 00:16:39.739 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:16:39.739 Malloc0 : 2.01 105176.30 102.71 0.00 0.00 2428.65 490.20 3066.24 00:16:39.739 =================================================================================================================== 00:16:39.739 Total : 420797.15 410.93 0.00 0.00 2430.10 454.01 4132.19' 00:16:39.739 20:43:22 -- bdevperf/test_config.sh@27 -- # cleanup 00:16:39.739 20:43:22 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:16:39.739 00:16:39.739 20:43:22 -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:16:39.739 20:43:22 -- bdevperf/common.sh@8 -- # local job_section=job0 00:16:39.739 20:43:22 -- bdevperf/common.sh@9 -- # local rw=write 00:16:39.739 20:43:22 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:16:39.739 20:43:22 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:16:39.739 20:43:22 -- bdevperf/common.sh@18 -- # job='[job0]' 00:16:39.739 20:43:22 -- bdevperf/common.sh@19 -- # echo 00:16:39.739 20:43:22 -- bdevperf/common.sh@20 -- # cat 00:16:39.739 00:16:39.739 20:43:22 -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:16:39.739 20:43:22 -- bdevperf/common.sh@8 -- # local job_section=job1 00:16:39.739 20:43:22 -- bdevperf/common.sh@9 -- # local rw=write 00:16:39.739 20:43:22 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:16:39.739 20:43:22 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:16:39.739 20:43:22 -- bdevperf/common.sh@18 -- # job='[job1]' 00:16:39.739 20:43:22 -- bdevperf/common.sh@19 -- # echo 00:16:39.739 20:43:22 -- bdevperf/common.sh@20 -- # cat 00:16:39.739 00:16:39.739 20:43:22 -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:16:39.739 20:43:22 -- bdevperf/common.sh@8 -- # local job_section=job2 00:16:39.739 20:43:22 -- bdevperf/common.sh@9 -- # local rw=write 00:16:39.739 20:43:22 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:16:39.739 20:43:22 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:16:39.739 20:43:22 -- 
bdevperf/common.sh@18 -- # job='[job2]' 00:16:39.739 20:43:22 -- bdevperf/common.sh@19 -- # echo 00:16:39.739 20:43:22 -- bdevperf/common.sh@20 -- # cat 00:16:39.739 20:43:22 -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:16:43.955 20:43:27 -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-04-15 20:43:22.825136] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:16:43.955 [2024-04-15 20:43:22.825304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56508 ] 00:16:43.955 Using job config with 3 jobs 00:16:43.955 [2024-04-15 20:43:22.975476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.955 [2024-04-15 20:43:23.190891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.955 cpumask for '\''job0'\'' is too big 00:16:43.955 cpumask for '\''job1'\'' is too big 00:16:43.955 cpumask for '\''job2'\'' is too big 00:16:43.955 Running I/O for 2 seconds... 00:16:43.955 00:16:43.955 Latency(us) 00:16:43.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.955 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:16:43.955 Malloc0 : 2.00 138293.33 135.05 0.00 0.00 1849.95 480.33 2868.84 00:16:43.955 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:16:43.955 Malloc0 : 2.00 138272.34 135.03 0.00 0.00 1849.19 470.46 2368.77 00:16:43.955 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:16:43.955 Malloc0 : 2.00 138335.98 135.09 0.00 0.00 1847.24 225.36 2105.57 00:16:43.955 =================================================================================================================== 00:16:43.955 Total : 414901.66 405.18 0.00 0.00 1848.79 225.36 2868.84' 00:16:43.955 20:43:27 -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-04-15 20:43:22.825136] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:16:43.955 [2024-04-15 20:43:22.825304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56508 ] 00:16:43.955 Using job config with 3 jobs 00:16:43.955 [2024-04-15 20:43:22.975476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.955 [2024-04-15 20:43:23.190891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.955 cpumask for '\''job0'\'' is too big 00:16:43.955 cpumask for '\''job1'\'' is too big 00:16:43.955 cpumask for '\''job2'\'' is too big 00:16:43.955 Running I/O for 2 seconds... 
00:16:43.955 00:16:43.955 Latency(us) 00:16:43.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.955 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:16:43.956 Malloc0 : 2.00 138293.33 135.05 0.00 0.00 1849.95 480.33 2868.84 00:16:43.956 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:16:43.956 Malloc0 : 2.00 138272.34 135.03 0.00 0.00 1849.19 470.46 2368.77 00:16:43.956 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:16:43.956 Malloc0 : 2.00 138335.98 135.09 0.00 0.00 1847.24 225.36 2105.57 00:16:43.956 =================================================================================================================== 00:16:43.956 Total : 414901.66 405.18 0.00 0.00 1848.79 225.36 2868.84' 00:16:43.956 20:43:27 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:16:43.956 20:43:27 -- bdevperf/common.sh@32 -- # echo '[2024-04-15 20:43:22.825136] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:16:43.956 [2024-04-15 20:43:22.825304] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56508 ] 00:16:43.956 Using job config with 3 jobs 00:16:43.956 [2024-04-15 20:43:22.975476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.956 [2024-04-15 20:43:23.190891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.956 cpumask for '\''job0'\'' is too big 00:16:43.956 cpumask for '\''job1'\'' is too big 00:16:43.956 cpumask for '\''job2'\'' is too big 00:16:43.956 Running I/O for 2 seconds... 
00:16:43.956 00:16:43.956 Latency(us) 00:16:43.956 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:43.956 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:16:43.956 Malloc0 : 2.00 138293.33 135.05 0.00 0.00 1849.95 480.33 2868.84 00:16:43.956 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:16:43.956 Malloc0 : 2.00 138272.34 135.03 0.00 0.00 1849.19 470.46 2368.77 00:16:43.956 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:16:43.956 Malloc0 : 2.00 138335.98 135.09 0.00 0.00 1847.24 225.36 2105.57 00:16:43.956 =================================================================================================================== 00:16:43.956 Total : 414901.66 405.18 0.00 0.00 1848.79 225.36 2868.84' 00:16:43.956 20:43:27 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:16:43.956 20:43:27 -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:16:43.956 20:43:27 -- bdevperf/test_config.sh@35 -- # cleanup 00:16:43.956 20:43:27 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:16:43.956 20:43:27 -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:16:43.956 20:43:27 -- bdevperf/common.sh@8 -- # local job_section=global 00:16:43.956 20:43:27 -- bdevperf/common.sh@9 -- # local rw=rw 00:16:43.956 20:43:27 -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:16:43.956 20:43:27 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:16:43.956 20:43:27 -- bdevperf/common.sh@13 -- # cat 00:16:43.956 00:16:43.956 20:43:27 -- bdevperf/common.sh@18 -- # job='[global]' 00:16:43.956 20:43:27 -- bdevperf/common.sh@19 -- # echo 00:16:43.956 20:43:27 -- bdevperf/common.sh@20 -- # cat 00:16:43.956 00:16:43.956 20:43:27 -- bdevperf/test_config.sh@38 -- # create_job job0 00:16:43.956 20:43:27 -- bdevperf/common.sh@8 -- # local job_section=job0 00:16:43.956 20:43:27 -- bdevperf/common.sh@9 -- # local rw= 00:16:43.956 20:43:27 -- bdevperf/common.sh@10 -- # local filename= 00:16:43.956 20:43:27 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:16:43.956 20:43:27 -- bdevperf/common.sh@18 -- # job='[job0]' 00:16:43.956 20:43:27 -- bdevperf/common.sh@19 -- # echo 00:16:43.956 20:43:27 -- bdevperf/common.sh@20 -- # cat 00:16:43.956 00:16:43.956 20:43:27 -- bdevperf/test_config.sh@39 -- # create_job job1 00:16:43.956 20:43:27 -- bdevperf/common.sh@8 -- # local job_section=job1 00:16:43.956 20:43:27 -- bdevperf/common.sh@9 -- # local rw= 00:16:43.956 20:43:27 -- bdevperf/common.sh@10 -- # local filename= 00:16:43.956 20:43:27 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:16:43.956 20:43:27 -- bdevperf/common.sh@18 -- # job='[job1]' 00:16:43.956 20:43:27 -- bdevperf/common.sh@19 -- # echo 00:16:43.956 20:43:27 -- bdevperf/common.sh@20 -- # cat 00:16:43.956 20:43:27 -- bdevperf/test_config.sh@40 -- # create_job job2 00:16:43.956 20:43:27 -- bdevperf/common.sh@8 -- # local job_section=job2 00:16:43.956 20:43:27 -- bdevperf/common.sh@9 -- # local rw= 00:16:43.956 20:43:27 -- bdevperf/common.sh@10 -- # local filename= 00:16:43.956 00:16:43.956 20:43:27 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:16:43.956 20:43:27 -- bdevperf/common.sh@18 -- # job='[job2]' 00:16:43.956 20:43:27 -- bdevperf/common.sh@19 -- # echo 00:16:43.956 20:43:27 -- bdevperf/common.sh@20 -- # cat 00:16:43.956 00:16:43.956 20:43:27 -- bdevperf/test_config.sh@41 -- # create_job job3 00:16:43.956 20:43:27 -- 
bdevperf/common.sh@8 -- # local job_section=job3 00:16:43.956 20:43:27 -- bdevperf/common.sh@9 -- # local rw= 00:16:43.956 20:43:27 -- bdevperf/common.sh@10 -- # local filename= 00:16:43.956 20:43:27 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:16:43.956 20:43:27 -- bdevperf/common.sh@18 -- # job='[job3]' 00:16:43.956 20:43:27 -- bdevperf/common.sh@19 -- # echo 00:16:43.956 20:43:27 -- bdevperf/common.sh@20 -- # cat 00:16:43.956 20:43:27 -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:16:49.232 20:43:32 -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-04-15 20:43:27.582896] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:16:49.232 [2024-04-15 20:43:27.583050] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56571 ] 00:16:49.232 Using job config with 4 jobs 00:16:49.232 [2024-04-15 20:43:27.736631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.232 [2024-04-15 20:43:27.952582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.232 cpumask for '\''job0'\'' is too big 00:16:49.232 cpumask for '\''job1'\'' is too big 00:16:49.232 cpumask for '\''job2'\'' is too big 00:16:49.232 cpumask for '\''job3'\'' is too big 00:16:49.232 Running I/O for 2 seconds... 00:16:49.232 00:16:49.232 Latency(us) 00:16:49.232 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:49.232 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:49.232 Malloc0 : 2.01 52120.92 50.90 0.00 0.00 4909.40 1138.33 8369.66 00:16:49.232 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:49.232 Malloc1 : 2.01 52111.84 50.89 0.00 0.00 4908.39 1302.82 8317.02 00:16:49.232 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:49.232 Malloc0 : 2.01 52104.35 50.88 0.00 0.00 4904.92 1072.53 7158.95 00:16:49.232 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:49.232 Malloc1 : 2.01 52095.84 50.87 0.00 0.00 4903.74 1210.71 7158.95 00:16:49.232 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:49.232 Malloc0 : 2.01 52088.17 50.87 0.00 0.00 4900.26 967.25 6185.12 00:16:49.232 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:49.232 Malloc1 : 2.01 52079.65 50.86 0.00 0.00 4899.67 1144.91 6185.12 00:16:49.232 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:49.232 Malloc0 : 2.01 52072.06 50.85 0.00 0.00 4896.10 1019.89 5342.89 00:16:49.232 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:49.232 Malloc1 : 2.01 52173.97 50.95 0.00 0.00 4885.51 284.58 5342.89 00:16:49.232 =================================================================================================================== 00:16:49.232 Total : 416846.81 407.08 0.00 0.00 4900.99 284.58 8369.66' 00:16:49.232 20:43:32 -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-04-15 20:43:27.582896] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:16:49.232 [2024-04-15 20:43:27.583050] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56571 ] 00:16:49.232 Using job config with 4 jobs 00:16:49.232 [2024-04-15 20:43:27.736631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.232 [2024-04-15 20:43:27.952582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.232 cpumask for '\''job0'\'' is too big 00:16:49.232 cpumask for '\''job1'\'' is too big 00:16:49.232 cpumask for '\''job2'\'' is too big 00:16:49.232 cpumask for '\''job3'\'' is too big 00:16:49.232 Running I/O for 2 seconds... 00:16:49.232 00:16:49.232 Latency(us) 00:16:49.232 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:49.232 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:49.232 Malloc0 : 2.01 52120.92 50.90 0.00 0.00 4909.40 1138.33 8369.66 00:16:49.232 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:49.232 Malloc1 : 2.01 52111.84 50.89 0.00 0.00 4908.39 1302.82 8317.02 00:16:49.232 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:49.233 Malloc0 : 2.01 52104.35 50.88 0.00 0.00 4904.92 1072.53 7158.95 00:16:49.233 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:49.233 Malloc1 : 2.01 52095.84 50.87 0.00 0.00 4903.74 1210.71 7158.95 00:16:49.233 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:49.233 Malloc0 : 2.01 52088.17 50.87 0.00 0.00 4900.26 967.25 6185.12 00:16:49.233 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:49.233 Malloc1 : 2.01 52079.65 50.86 0.00 0.00 4899.67 1144.91 6185.12 00:16:49.233 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:49.233 Malloc0 : 2.01 52072.06 50.85 0.00 0.00 4896.10 1019.89 5342.89 00:16:49.233 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:49.233 Malloc1 : 2.01 52173.97 50.95 0.00 0.00 4885.51 284.58 5342.89 00:16:49.233 =================================================================================================================== 00:16:49.233 Total : 416846.81 407.08 0.00 0.00 4900.99 284.58 8369.66' 00:16:49.233 20:43:32 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:16:49.233 20:43:32 -- bdevperf/common.sh@32 -- # echo '[2024-04-15 20:43:27.582896] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:16:49.233 [2024-04-15 20:43:27.583050] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56571 ] 00:16:49.233 Using job config with 4 jobs 00:16:49.233 [2024-04-15 20:43:27.736631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.233 [2024-04-15 20:43:27.952582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.233 cpumask for '\''job0'\'' is too big 00:16:49.233 cpumask for '\''job1'\'' is too big 00:16:49.233 cpumask for '\''job2'\'' is too big 00:16:49.233 cpumask for '\''job3'\'' is too big 00:16:49.233 Running I/O for 2 seconds... 
00:16:49.233 00:16:49.233 Latency(us) 00:16:49.233 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:49.233 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:49.233 Malloc0 : 2.01 52120.92 50.90 0.00 0.00 4909.40 1138.33 8369.66 00:16:49.233 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:49.233 Malloc1 : 2.01 52111.84 50.89 0.00 0.00 4908.39 1302.82 8317.02 00:16:49.233 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:49.233 Malloc0 : 2.01 52104.35 50.88 0.00 0.00 4904.92 1072.53 7158.95 00:16:49.233 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:49.233 Malloc1 : 2.01 52095.84 50.87 0.00 0.00 4903.74 1210.71 7158.95 00:16:49.233 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:49.233 Malloc0 : 2.01 52088.17 50.87 0.00 0.00 4900.26 967.25 6185.12 00:16:49.233 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:49.233 Malloc1 : 2.01 52079.65 50.86 0.00 0.00 4899.67 1144.91 6185.12 00:16:49.233 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:49.233 Malloc0 : 2.01 52072.06 50.85 0.00 0.00 4896.10 1019.89 5342.89 00:16:49.233 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:49.233 Malloc1 : 2.01 52173.97 50.95 0.00 0.00 4885.51 284.58 5342.89 00:16:49.233 =================================================================================================================== 00:16:49.233 Total : 416846.81 407.08 0.00 0.00 4900.99 284.58 8369.66' 00:16:49.233 20:43:32 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:16:49.233 20:43:32 -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:16:49.233 20:43:32 -- bdevperf/test_config.sh@44 -- # cleanup 00:16:49.233 20:43:32 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:16:49.233 20:43:32 -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:16:49.233 ************************************ 00:16:49.233 END TEST bdevperf_config 00:16:49.233 ************************************ 00:16:49.233 00:16:49.233 real 0m19.034s 00:16:49.233 user 0m17.059s 00:16:49.233 sys 0m1.154s 00:16:49.233 20:43:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:49.233 20:43:32 -- common/autotest_common.sh@10 -- # set +x 00:16:49.233 20:43:32 -- spdk/autotest.sh@198 -- # uname -s 00:16:49.233 20:43:32 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:16:49.233 20:43:32 -- spdk/autotest.sh@199 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:16:49.233 20:43:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:49.233 20:43:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:49.233 20:43:32 -- common/autotest_common.sh@10 -- # set +x 00:16:49.233 ************************************ 00:16:49.233 START TEST reactor_set_interrupt 00:16:49.233 ************************************ 00:16:49.233 20:43:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:16:49.233 * Looking for test storage... 
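The bdevperf_config run that just finished fits together as follows: create_job appends INI-style [global]/[jobN] sections to test.conf, bdevperf consumes that file via -j alongside the bdev config passed with --json, and get_num_jobs (bdevperf/common.sh) recovers the job count by grepping 'Using job config with N jobs' out of the captured output. A minimal sketch of both halves, assuming a scratch file under /tmp and illustrative job parameters; the -t 2/--json/-j flags and the grep pattern are exactly the ones used in this log:

#!/usr/bin/env bash
# Build a tiny bdevperf job file, run it, and parse the job count back out.
conf=/tmp/bdevperf-sketch.conf   # illustrative scratch path
cat > "$conf" <<'EOF'
[global]
filename=Malloc0
rw=read

[job0]

[job1]
EOF

output=$(/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 \
           --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json \
           -j "$conf" 2>&1)

# Same extraction get_num_jobs performs in bdevperf/common.sh:
num_jobs=$(grep -oE 'Using job config with [0-9]+ jobs' <<<"$output" |
           grep -oE '[0-9]+')
echo "bdevperf ran with $num_jobs jobs"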
00:16:49.233 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:16:49.233 20:43:32 -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:16:49.233 20:43:32 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:16:49.233 20:43:32 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:16:49.233 20:43:32 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:16:49.233 20:43:32 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:16:49.233 20:43:32 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:49.233 20:43:32 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:16:49.233 20:43:32 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:16:49.233 20:43:32 -- common/autotest_common.sh@34 -- # set -e 00:16:49.233 20:43:32 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:16:49.233 20:43:32 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:16:49.233 20:43:32 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:16:49.233 20:43:32 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:16:49.233 20:43:32 -- common/build_config.sh@1 -- # CONFIG_RDMA=y 00:16:49.233 20:43:32 -- common/build_config.sh@2 -- # CONFIG_UNIT_TESTS=y 00:16:49.233 20:43:32 -- common/build_config.sh@3 -- # CONFIG_GOLANG=n 00:16:49.233 20:43:32 -- common/build_config.sh@4 -- # CONFIG_FUSE=n 00:16:49.233 20:43:32 -- common/build_config.sh@5 -- # CONFIG_ISAL=n 00:16:49.233 20:43:32 -- common/build_config.sh@6 -- # CONFIG_VTUNE_DIR= 00:16:49.233 20:43:32 -- common/build_config.sh@7 -- # CONFIG_CUSTOMOCF=n 00:16:49.233 20:43:32 -- common/build_config.sh@8 -- # CONFIG_IPSEC_MB_DIR= 00:16:49.233 20:43:32 -- common/build_config.sh@9 -- # CONFIG_VBDEV_COMPRESS=n 00:16:49.233 20:43:32 -- common/build_config.sh@10 -- # CONFIG_OCF_PATH= 00:16:49.233 20:43:32 -- common/build_config.sh@11 -- # CONFIG_SHARED=n 00:16:49.233 20:43:32 -- common/build_config.sh@12 -- # CONFIG_DPDK_LIB_DIR= 00:16:49.233 20:43:32 -- common/build_config.sh@13 -- # CONFIG_TESTS=y 00:16:49.233 20:43:32 -- common/build_config.sh@14 -- # CONFIG_APPS=y 00:16:49.233 20:43:32 -- common/build_config.sh@15 -- # CONFIG_ISAL_CRYPTO=n 00:16:49.233 20:43:32 -- common/build_config.sh@16 -- # CONFIG_LIBDIR= 00:16:49.233 20:43:32 -- common/build_config.sh@17 -- # CONFIG_DPDK_COMPRESSDEV=n 00:16:49.233 20:43:32 -- common/build_config.sh@18 -- # CONFIG_DAOS_DIR= 00:16:49.233 20:43:32 -- common/build_config.sh@19 -- # CONFIG_ISCSI_INITIATOR=n 00:16:49.233 20:43:32 -- common/build_config.sh@20 -- # CONFIG_DPDK_PKG_CONFIG=n 00:16:49.233 20:43:32 -- common/build_config.sh@21 -- # CONFIG_ASAN=y 00:16:49.233 20:43:32 -- common/build_config.sh@22 -- # CONFIG_LTO=n 00:16:49.233 20:43:32 -- common/build_config.sh@23 -- # CONFIG_CET=n 00:16:49.233 20:43:32 -- common/build_config.sh@24 -- # CONFIG_FUZZER=n 00:16:49.233 20:43:32 -- common/build_config.sh@25 -- # CONFIG_USDT=n 00:16:49.233 20:43:32 -- common/build_config.sh@26 -- # CONFIG_VTUNE=n 00:16:49.233 20:43:32 -- common/build_config.sh@27 -- # CONFIG_VHOST=y 00:16:49.233 20:43:32 -- common/build_config.sh@28 -- # CONFIG_WPDK_DIR= 00:16:49.233 20:43:32 -- 
common/build_config.sh@29 -- # CONFIG_UBLK=n 00:16:49.233 20:43:32 -- common/build_config.sh@30 -- # CONFIG_URING=n 00:16:49.233 20:43:32 -- common/build_config.sh@31 -- # CONFIG_SMA=n 00:16:49.233 20:43:32 -- common/build_config.sh@32 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:16:49.233 20:43:32 -- common/build_config.sh@33 -- # CONFIG_IDXD_KERNEL=n 00:16:49.233 20:43:32 -- common/build_config.sh@34 -- # CONFIG_FC_PATH= 00:16:49.233 20:43:32 -- common/build_config.sh@35 -- # CONFIG_PREFIX=/usr/local 00:16:49.233 20:43:32 -- common/build_config.sh@36 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=n 00:16:49.233 20:43:32 -- common/build_config.sh@37 -- # CONFIG_XNVME=n 00:16:49.233 20:43:32 -- common/build_config.sh@38 -- # CONFIG_RDMA_PROV=verbs 00:16:49.233 20:43:32 -- common/build_config.sh@39 -- # CONFIG_RDMA_SET_TOS=y 00:16:49.233 20:43:32 -- common/build_config.sh@40 -- # CONFIG_FUZZER_LIB= 00:16:49.233 20:43:32 -- common/build_config.sh@41 -- # CONFIG_HAVE_LIBARCHIVE=n 00:16:49.233 20:43:32 -- common/build_config.sh@42 -- # CONFIG_ARCH=native 00:16:49.233 20:43:32 -- common/build_config.sh@43 -- # CONFIG_PGO_CAPTURE=n 00:16:49.233 20:43:32 -- common/build_config.sh@44 -- # CONFIG_DAOS=y 00:16:49.233 20:43:32 -- common/build_config.sh@45 -- # CONFIG_WERROR=y 00:16:49.233 20:43:32 -- common/build_config.sh@46 -- # CONFIG_DEBUG=y 00:16:49.233 20:43:32 -- common/build_config.sh@47 -- # CONFIG_AVAHI=n 00:16:49.233 20:43:32 -- common/build_config.sh@48 -- # CONFIG_CROSS_PREFIX= 00:16:49.233 20:43:32 -- common/build_config.sh@49 -- # CONFIG_PGO_USE=n 00:16:49.233 20:43:32 -- common/build_config.sh@50 -- # CONFIG_CRYPTO=n 00:16:49.233 20:43:32 -- common/build_config.sh@51 -- # CONFIG_HAVE_ARC4RANDOM=n 00:16:49.233 20:43:32 -- common/build_config.sh@52 -- # CONFIG_OPENSSL_PATH= 00:16:49.233 20:43:32 -- common/build_config.sh@53 -- # CONFIG_EXAMPLES=y 00:16:49.233 20:43:32 -- common/build_config.sh@54 -- # CONFIG_DPDK_INC_DIR= 00:16:49.233 20:43:32 -- common/build_config.sh@55 -- # CONFIG_MAX_LCORES= 00:16:49.233 20:43:32 -- common/build_config.sh@56 -- # CONFIG_VIRTIO=y 00:16:49.234 20:43:32 -- common/build_config.sh@57 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:16:49.234 20:43:32 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB=n 00:16:49.234 20:43:32 -- common/build_config.sh@59 -- # CONFIG_UBSAN=n 00:16:49.234 20:43:32 -- common/build_config.sh@60 -- # CONFIG_HAVE_EXECINFO_H=y 00:16:49.234 20:43:32 -- common/build_config.sh@61 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:16:49.234 20:43:32 -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:16:49.234 20:43:32 -- common/build_config.sh@63 -- # CONFIG_URING_PATH= 00:16:49.234 20:43:32 -- common/build_config.sh@64 -- # CONFIG_NVME_CUSE=y 00:16:49.234 20:43:32 -- common/build_config.sh@65 -- # CONFIG_URING_ZNS=n 00:16:49.234 20:43:32 -- common/build_config.sh@66 -- # CONFIG_VFIO_USER=n 00:16:49.234 20:43:32 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:16:49.234 20:43:32 -- common/build_config.sh@68 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=n 00:16:49.234 20:43:32 -- common/build_config.sh@69 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:16:49.234 20:43:32 -- common/build_config.sh@70 -- # CONFIG_RBD=n 00:16:49.234 20:43:32 -- common/build_config.sh@71 -- # CONFIG_RAID5F=n 00:16:49.234 20:43:32 -- common/build_config.sh@72 -- # CONFIG_VFIO_USER_DIR= 00:16:49.234 20:43:32 -- common/build_config.sh@73 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:16:49.234 20:43:32 -- common/build_config.sh@74 -- # CONFIG_TSAN=n 00:16:49.234 20:43:32 
-- common/build_config.sh@75 -- # CONFIG_IDXD=y 00:16:49.234 20:43:32 -- common/build_config.sh@76 -- # CONFIG_OCF=n 00:16:49.234 20:43:32 -- common/build_config.sh@77 -- # CONFIG_CRYPTO_MLX5=n 00:16:49.234 20:43:32 -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:16:49.234 20:43:32 -- common/build_config.sh@79 -- # CONFIG_COVERAGE=y 00:16:49.234 20:43:32 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:16:49.234 20:43:32 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:16:49.234 20:43:32 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:16:49.234 20:43:32 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:16:49.234 20:43:32 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:16:49.234 20:43:32 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:16:49.234 20:43:32 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:16:49.234 20:43:32 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:16:49.234 20:43:32 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:16:49.234 20:43:32 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:16:49.234 20:43:32 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:16:49.234 20:43:32 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:16:49.234 20:43:32 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:16:49.234 20:43:32 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:16:49.234 20:43:32 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:16:49.234 20:43:32 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:16:49.234 #define SPDK_CONFIG_H 00:16:49.234 #define SPDK_CONFIG_APPS 1 00:16:49.234 #define SPDK_CONFIG_ARCH native 00:16:49.234 #define SPDK_CONFIG_ASAN 1 00:16:49.234 #undef SPDK_CONFIG_AVAHI 00:16:49.234 #undef SPDK_CONFIG_CET 00:16:49.234 #define SPDK_CONFIG_COVERAGE 1 00:16:49.234 #define SPDK_CONFIG_CROSS_PREFIX 00:16:49.234 #undef SPDK_CONFIG_CRYPTO 00:16:49.234 #undef SPDK_CONFIG_CRYPTO_MLX5 00:16:49.234 #undef SPDK_CONFIG_CUSTOMOCF 00:16:49.234 #define SPDK_CONFIG_DAOS 1 00:16:49.234 #define SPDK_CONFIG_DAOS_DIR 00:16:49.234 #define SPDK_CONFIG_DEBUG 1 00:16:49.234 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:16:49.234 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:16:49.234 #define SPDK_CONFIG_DPDK_INC_DIR 00:16:49.234 #define SPDK_CONFIG_DPDK_LIB_DIR 00:16:49.234 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:16:49.234 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:16:49.234 #define SPDK_CONFIG_EXAMPLES 1 00:16:49.234 #undef SPDK_CONFIG_FC 00:16:49.234 #define SPDK_CONFIG_FC_PATH 00:16:49.234 #define SPDK_CONFIG_FIO_PLUGIN 1 00:16:49.234 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:16:49.234 #undef SPDK_CONFIG_FUSE 00:16:49.234 #undef SPDK_CONFIG_FUZZER 00:16:49.234 #define SPDK_CONFIG_FUZZER_LIB 00:16:49.234 #undef SPDK_CONFIG_GOLANG 00:16:49.234 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:16:49.234 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:16:49.234 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:16:49.234 #undef SPDK_CONFIG_HAVE_LIBBSD 00:16:49.234 #undef SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 00:16:49.234 #define 
SPDK_CONFIG_IDXD 1 00:16:49.234 #undef SPDK_CONFIG_IDXD_KERNEL 00:16:49.234 #undef SPDK_CONFIG_IPSEC_MB 00:16:49.234 #define SPDK_CONFIG_IPSEC_MB_DIR 00:16:49.234 #undef SPDK_CONFIG_ISAL 00:16:49.234 #undef SPDK_CONFIG_ISAL_CRYPTO 00:16:49.234 #undef SPDK_CONFIG_ISCSI_INITIATOR 00:16:49.234 #define SPDK_CONFIG_LIBDIR 00:16:49.234 #undef SPDK_CONFIG_LTO 00:16:49.234 #define SPDK_CONFIG_MAX_LCORES 00:16:49.234 #define SPDK_CONFIG_NVME_CUSE 1 00:16:49.234 #undef SPDK_CONFIG_OCF 00:16:49.234 #define SPDK_CONFIG_OCF_PATH 00:16:49.234 #define SPDK_CONFIG_OPENSSL_PATH 00:16:49.234 #undef SPDK_CONFIG_PGO_CAPTURE 00:16:49.234 #undef SPDK_CONFIG_PGO_USE 00:16:49.234 #define SPDK_CONFIG_PREFIX /usr/local 00:16:49.234 #undef SPDK_CONFIG_RAID5F 00:16:49.234 #undef SPDK_CONFIG_RBD 00:16:49.234 #define SPDK_CONFIG_RDMA 1 00:16:49.234 #define SPDK_CONFIG_RDMA_PROV verbs 00:16:49.234 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:16:49.234 #undef SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 00:16:49.234 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:16:49.234 #undef SPDK_CONFIG_SHARED 00:16:49.234 #undef SPDK_CONFIG_SMA 00:16:49.234 #define SPDK_CONFIG_TESTS 1 00:16:49.234 #undef SPDK_CONFIG_TSAN 00:16:49.234 #undef SPDK_CONFIG_UBLK 00:16:49.234 #undef SPDK_CONFIG_UBSAN 00:16:49.234 #define SPDK_CONFIG_UNIT_TESTS 1 00:16:49.234 #undef SPDK_CONFIG_URING 00:16:49.234 #define SPDK_CONFIG_URING_PATH 00:16:49.234 #undef SPDK_CONFIG_URING_ZNS 00:16:49.234 #undef SPDK_CONFIG_USDT 00:16:49.234 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:16:49.234 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:16:49.234 #undef SPDK_CONFIG_VFIO_USER 00:16:49.234 #define SPDK_CONFIG_VFIO_USER_DIR 00:16:49.234 #define SPDK_CONFIG_VHOST 1 00:16:49.234 #define SPDK_CONFIG_VIRTIO 1 00:16:49.234 #undef SPDK_CONFIG_VTUNE 00:16:49.234 #define SPDK_CONFIG_VTUNE_DIR 00:16:49.234 #define SPDK_CONFIG_WERROR 1 00:16:49.234 #define SPDK_CONFIG_WPDK_DIR 00:16:49.234 #undef SPDK_CONFIG_XNVME 00:16:49.234 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:16:49.234 20:43:32 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:16:49.234 20:43:32 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:49.234 20:43:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:49.234 20:43:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:49.234 20:43:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:49.234 20:43:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:16:49.234 20:43:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:16:49.234 20:43:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:16:49.234 20:43:32 -- paths/export.sh@5 -- # export PATH 00:16:49.234 20:43:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:16:49.234 20:43:32 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:16:49.234 20:43:32 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:16:49.234 20:43:32 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:16:49.234 20:43:32 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:16:49.234 20:43:32 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:16:49.234 20:43:32 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:16:49.234 20:43:32 -- pm/common@16 -- # TEST_TAG=N/A 00:16:49.234 20:43:32 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:16:49.234 20:43:32 -- common/autotest_common.sh@52 -- # : 1 00:16:49.234 20:43:32 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:16:49.234 20:43:32 -- common/autotest_common.sh@56 -- # : 0 00:16:49.234 20:43:32 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:16:49.234 20:43:32 -- common/autotest_common.sh@58 -- # : 0 00:16:49.234 20:43:32 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:16:49.234 20:43:32 -- common/autotest_common.sh@60 -- # : 1 00:16:49.234 20:43:32 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:16:49.234 20:43:32 -- common/autotest_common.sh@62 -- # : 1 00:16:49.234 20:43:32 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:16:49.234 20:43:32 -- common/autotest_common.sh@64 -- # : 00:16:49.234 20:43:32 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:16:49.234 20:43:32 -- common/autotest_common.sh@66 -- # : 0 00:16:49.234 20:43:32 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:16:49.234 20:43:32 -- common/autotest_common.sh@68 -- # : 0 00:16:49.234 20:43:32 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:16:49.234 20:43:32 -- common/autotest_common.sh@70 -- # : 0 00:16:49.234 20:43:32 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:16:49.234 20:43:32 -- common/autotest_common.sh@72 -- # : 0 00:16:49.234 20:43:32 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:16:49.234 20:43:32 -- common/autotest_common.sh@74 -- # : 0 00:16:49.234 20:43:32 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:16:49.234 20:43:32 -- common/autotest_common.sh@76 -- # : 0 00:16:49.234 20:43:32 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:16:49.234 20:43:32 -- common/autotest_common.sh@78 -- # : 0 00:16:49.235 20:43:32 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:16:49.235 20:43:32 -- 
common/autotest_common.sh@80 -- # : 0 00:16:49.235 20:43:32 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:16:49.235 20:43:32 -- common/autotest_common.sh@82 -- # : 0 00:16:49.235 20:43:32 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:16:49.235 20:43:32 -- common/autotest_common.sh@84 -- # : 0 00:16:49.235 20:43:32 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:16:49.235 20:43:32 -- common/autotest_common.sh@86 -- # : 0 00:16:49.235 20:43:32 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:16:49.235 20:43:32 -- common/autotest_common.sh@88 -- # : 0 00:16:49.235 20:43:32 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:16:49.235 20:43:32 -- common/autotest_common.sh@90 -- # : 0 00:16:49.235 20:43:32 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:16:49.235 20:43:32 -- common/autotest_common.sh@92 -- # : 0 00:16:49.235 20:43:32 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:16:49.235 20:43:32 -- common/autotest_common.sh@94 -- # : 0 00:16:49.235 20:43:32 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:16:49.235 20:43:32 -- common/autotest_common.sh@96 -- # : rdma 00:16:49.235 20:43:32 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:16:49.235 20:43:32 -- common/autotest_common.sh@98 -- # : 0 00:16:49.235 20:43:32 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:16:49.235 20:43:32 -- common/autotest_common.sh@100 -- # : 0 00:16:49.235 20:43:32 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:16:49.235 20:43:32 -- common/autotest_common.sh@102 -- # : 1 00:16:49.235 20:43:32 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:16:49.235 20:43:32 -- common/autotest_common.sh@104 -- # : 0 00:16:49.235 20:43:32 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:16:49.235 20:43:32 -- common/autotest_common.sh@106 -- # : 0 00:16:49.235 20:43:32 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:16:49.235 20:43:32 -- common/autotest_common.sh@108 -- # : 0 00:16:49.235 20:43:32 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:16:49.235 20:43:32 -- common/autotest_common.sh@110 -- # : 0 00:16:49.235 20:43:32 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:16:49.235 20:43:32 -- common/autotest_common.sh@112 -- # : 0 00:16:49.235 20:43:32 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:16:49.235 20:43:32 -- common/autotest_common.sh@114 -- # : 1 00:16:49.235 20:43:32 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:16:49.235 20:43:32 -- common/autotest_common.sh@116 -- # : 0 00:16:49.235 20:43:32 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:16:49.235 20:43:32 -- common/autotest_common.sh@118 -- # : 00:16:49.235 20:43:32 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:16:49.235 20:43:32 -- common/autotest_common.sh@120 -- # : 0 00:16:49.235 20:43:32 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:16:49.235 20:43:32 -- common/autotest_common.sh@122 -- # : 0 00:16:49.235 20:43:32 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:16:49.235 20:43:32 -- common/autotest_common.sh@124 -- # : 0 00:16:49.235 20:43:32 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:16:49.235 20:43:32 -- common/autotest_common.sh@126 -- # : 0 00:16:49.235 20:43:32 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:16:49.235 
20:43:32 -- common/autotest_common.sh@128 -- # : 0 00:16:49.235 20:43:32 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:16:49.235 20:43:32 -- common/autotest_common.sh@130 -- # : 0 00:16:49.235 20:43:32 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:16:49.235 20:43:32 -- common/autotest_common.sh@132 -- # : 00:16:49.235 20:43:32 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:16:49.235 20:43:32 -- common/autotest_common.sh@134 -- # : true 00:16:49.235 20:43:32 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:16:49.235 20:43:32 -- common/autotest_common.sh@136 -- # : 0 00:16:49.235 20:43:32 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:16:49.235 20:43:32 -- common/autotest_common.sh@138 -- # : 0 00:16:49.235 20:43:32 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:16:49.235 20:43:32 -- common/autotest_common.sh@140 -- # : 0 00:16:49.235 20:43:32 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:16:49.235 20:43:32 -- common/autotest_common.sh@142 -- # : 0 00:16:49.235 20:43:32 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:16:49.235 20:43:32 -- common/autotest_common.sh@144 -- # : 0 00:16:49.235 20:43:32 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:16:49.235 20:43:32 -- common/autotest_common.sh@146 -- # : 0 00:16:49.235 20:43:32 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:16:49.235 20:43:32 -- common/autotest_common.sh@148 -- # : 00:16:49.235 20:43:32 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:16:49.235 20:43:32 -- common/autotest_common.sh@150 -- # : 0 00:16:49.235 20:43:32 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:16:49.235 20:43:32 -- common/autotest_common.sh@152 -- # : 1 00:16:49.235 20:43:32 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:16:49.235 20:43:32 -- common/autotest_common.sh@154 -- # : 0 00:16:49.235 20:43:32 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:16:49.235 20:43:32 -- common/autotest_common.sh@156 -- # : 0 00:16:49.235 20:43:32 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:16:49.235 20:43:32 -- common/autotest_common.sh@158 -- # : 0 00:16:49.235 20:43:32 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:16:49.235 20:43:32 -- common/autotest_common.sh@160 -- # : 0 00:16:49.235 20:43:32 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:16:49.235 20:43:32 -- common/autotest_common.sh@163 -- # : 00:16:49.235 20:43:32 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:16:49.235 20:43:32 -- common/autotest_common.sh@165 -- # : 0 00:16:49.235 20:43:32 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:16:49.235 20:43:32 -- common/autotest_common.sh@167 -- # : 0 00:16:49.235 20:43:32 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:16:49.235 20:43:32 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:16:49.235 20:43:32 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:16:49.235 20:43:32 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:16:49.235 20:43:32 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:16:49.235 20:43:32 -- common/autotest_common.sh@173 -- # export 
VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:49.235 20:43:32 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:49.235 20:43:32 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:49.235 20:43:32 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:16:49.235 20:43:32 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:16:49.235 20:43:32 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:16:49.235 20:43:32 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:16:49.235 20:43:32 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:16:49.235 20:43:32 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:16:49.235 20:43:32 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:16:49.235 20:43:32 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:49.235 20:43:32 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:49.235 20:43:32 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:49.235 20:43:32 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:49.235 20:43:32 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:16:49.235 20:43:32 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:16:49.235 20:43:32 -- common/autotest_common.sh@196 -- # cat 00:16:49.235 20:43:32 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:16:49.235 20:43:32 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:49.235 20:43:32 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:49.235 20:43:32 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:49.235 20:43:32 -- common/autotest_common.sh@226 -- # 
DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:49.235 20:43:32 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:16:49.235 20:43:32 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:16:49.235 20:43:32 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:16:49.235 20:43:32 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:16:49.235 20:43:32 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:16:49.235 20:43:32 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:16:49.235 20:43:32 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:16:49.235 20:43:32 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:16:49.235 20:43:32 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:16:49.235 20:43:32 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:16:49.235 20:43:32 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:16:49.235 20:43:32 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:16:49.235 20:43:32 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:49.235 20:43:32 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:49.235 20:43:32 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:16:49.235 20:43:32 -- common/autotest_common.sh@249 -- # export valgrind= 00:16:49.235 20:43:32 -- common/autotest_common.sh@249 -- # valgrind= 00:16:49.236 20:43:32 -- common/autotest_common.sh@255 -- # uname -s 00:16:49.236 20:43:32 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:16:49.236 20:43:32 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:16:49.236 20:43:32 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:16:49.236 20:43:32 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:16:49.236 20:43:32 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:16:49.236 20:43:32 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:16:49.236 20:43:32 -- common/autotest_common.sh@265 -- # MAKE=make 00:16:49.236 20:43:32 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:16:49.236 20:43:32 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:16:49.236 20:43:32 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:16:49.236 20:43:32 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:16:49.236 20:43:32 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:16:49.236 20:43:32 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:16:49.236 20:43:32 -- common/autotest_common.sh@309 -- # [[ -z 56681 ]] 00:16:49.236 20:43:32 -- common/autotest_common.sh@309 -- # kill -0 56681 00:16:49.236 20:43:32 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:16:49.236 20:43:32 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:16:49.236 20:43:32 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:16:49.236 20:43:32 -- common/autotest_common.sh@322 -- # local mount target_dir 00:16:49.236 20:43:32 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:16:49.236 20:43:32 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:16:49.236 20:43:32 -- 
common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:16:49.236 20:43:32 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:16:49.236 20:43:32 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.I4q8NB 00:16:49.236 20:43:32 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:16:49.236 20:43:32 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:16:49.236 20:43:32 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:16:49.236 20:43:32 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.I4q8NB/tests/interrupt /tmp/spdk.I4q8NB 00:16:49.236 20:43:32 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:16:49.236 20:43:32 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:16:49.236 20:43:32 -- common/autotest_common.sh@318 -- # df -T 00:16:49.236 20:43:32 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:16:49.236 20:43:32 -- common/autotest_common.sh@352 -- # mounts["$mount"]=devtmpfs 00:16:49.236 20:43:32 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:16:49.236 20:43:32 -- common/autotest_common.sh@353 -- # avails["$mount"]=6267637760 00:16:49.236 20:43:32 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6267637760 00:16:49.236 20:43:32 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:16:49.236 20:43:32 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:16:49.236 20:43:32 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:16:49.236 20:43:32 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:16:49.236 20:43:32 -- common/autotest_common.sh@353 -- # avails["$mount"]=6295592960 00:16:49.236 20:43:32 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6298185728 00:16:49.236 20:43:32 -- common/autotest_common.sh@354 -- # uses["$mount"]=2592768 00:16:49.236 20:43:32 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:16:49.236 20:43:32 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:16:49.236 20:43:32 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:16:49.236 20:43:32 -- common/autotest_common.sh@353 -- # avails["$mount"]=6277238784 00:16:49.236 20:43:32 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6298185728 00:16:49.236 20:43:32 -- common/autotest_common.sh@354 -- # uses["$mount"]=20946944 00:16:49.236 20:43:32 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:16:49.236 20:43:32 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:16:49.236 20:43:32 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:16:49.236 20:43:32 -- common/autotest_common.sh@353 -- # avails["$mount"]=6298185728 00:16:49.236 20:43:32 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6298185728 00:16:49.236 20:43:32 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:16:49.236 20:43:32 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:16:49.236 20:43:32 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda1 00:16:49.236 20:43:32 -- common/autotest_common.sh@352 -- # fss["$mount"]=xfs 00:16:49.236 20:43:32 -- common/autotest_common.sh@353 -- # avails["$mount"]=14369148928 00:16:49.236 20:43:32 -- common/autotest_common.sh@353 -- # sizes["$mount"]=21463302144 00:16:49.236 20:43:32 -- common/autotest_common.sh@354 -- # 
uses["$mount"]=7094153216 00:16:49.236 20:43:32 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:16:49.236 20:43:32 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:16:49.236 20:43:32 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:16:49.236 20:43:32 -- common/autotest_common.sh@353 -- # avails["$mount"]=1259638784 00:16:49.236 20:43:32 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1259638784 00:16:49.236 20:43:32 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:16:49.236 20:43:32 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:16:49.236 20:43:32 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/centos7-vg-autotest/centos7-libvirt/output 00:16:49.236 20:43:32 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:16:49.236 20:43:32 -- common/autotest_common.sh@353 -- # avails["$mount"]=93643177984 00:16:49.236 20:43:32 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:16:49.236 20:43:32 -- common/autotest_common.sh@354 -- # uses["$mount"]=6059601920 00:16:49.236 20:43:32 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:16:49.236 20:43:32 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:16:49.236 * Looking for test storage... 00:16:49.236 20:43:32 -- common/autotest_common.sh@359 -- # local target_space new_size 00:16:49.236 20:43:32 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:16:49.236 20:43:32 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:16:49.236 20:43:32 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:16:49.236 20:43:32 -- common/autotest_common.sh@363 -- # mount=/ 00:16:49.236 20:43:32 -- common/autotest_common.sh@365 -- # target_space=14369148928 00:16:49.236 20:43:32 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:16:49.236 20:43:32 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:16:49.236 20:43:32 -- common/autotest_common.sh@371 -- # [[ xfs == tmpfs ]] 00:16:49.236 20:43:32 -- common/autotest_common.sh@371 -- # [[ xfs == ramfs ]] 00:16:49.236 20:43:32 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:16:49.236 20:43:32 -- common/autotest_common.sh@372 -- # new_size=9308745728 00:16:49.236 20:43:32 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:16:49.236 20:43:32 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:16:49.236 20:43:32 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:16:49.236 20:43:32 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:16:49.236 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:16:49.236 20:43:32 -- common/autotest_common.sh@380 -- # return 0 00:16:49.236 20:43:32 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:16:49.236 20:43:32 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:16:49.236 20:43:32 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:16:49.236 20:43:32 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:16:49.236 20:43:32 -- common/autotest_common.sh@1672 -- # 
00:16:49.236 20:43:32 -- common/autotest_common.sh@1672 -- # true 00:16:49.236 20:43:32 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:16:49.236 20:43:32 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:16:49.236 20:43:32 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:16:49.236 20:43:32 -- common/autotest_common.sh@27 -- # exec 00:16:49.236 20:43:32 -- common/autotest_common.sh@29 -- # exec 00:16:49.236 20:43:32 -- common/autotest_common.sh@31 -- # xtrace_restore 00:16:49.236 20:43:32 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:16:49.236 20:43:32 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:16:49.236 20:43:32 -- common/autotest_common.sh@18 -- # set -x 00:16:49.236 20:43:32 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:49.236 20:43:32 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:16:49.236 20:43:32 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:16:49.236 20:43:32 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:16:49.236 20:43:32 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:16:49.236 20:43:32 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:16:49.236 20:43:32 -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:16:49.236 20:43:32 -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:16:49.236 20:43:32 -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:16:49.236 20:43:32 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.236 20:43:32 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:16:49.236 20:43:32 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=56722 00:16:49.236 20:43:32 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:49.236 20:43:32 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 56722 /var/tmp/spdk.sock 00:16:49.236 20:43:32 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:16:49.236 20:43:32 -- common/autotest_common.sh@819 -- # '[' -z 56722 ']' 00:16:49.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.236 20:43:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.236 20:43:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:49.236 20:43:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.236 20:43:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:49.236 20:43:32 -- common/autotest_common.sh@10 -- # set +x 00:16:49.237 [2024-04-15 20:43:32.674244] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization...
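start_intr_tgt above is the harness entry point: it backgrounds the interrupt_tgt example on a three-core mask and then polls the UNIX-domain RPC socket until the app answers. A sketch of that sequence (paths and arguments as traced in this log; the retry loop is illustrative, the real waitforlisten allows max_retries=100):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
rpc_server_addr=/var/tmp/spdk.sock
/home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r "$rpc_server_addr" -E -g &
intr_tgt_pid=$!
trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT
for ((i = 0; i < 100; i++)); do
    kill -0 "$intr_tgt_pid" || exit 1                                   # target died early
    "$rpc_py" -s "$rpc_server_addr" rpc_get_methods &> /dev/null && break
    sleep 0.5
done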
00:16:49.237 [2024-04-15 20:43:32.674425] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56722 ] 00:16:49.495 [2024-04-15 20:43:32.833877] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:49.754 [2024-04-15 20:43:33.053623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.754 [2024-04-15 20:43:33.053628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:49.754 [2024-04-15 20:43:33.053693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:50.012 [2024-04-15 20:43:33.378176] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:50.578 20:43:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:50.578 20:43:34 -- common/autotest_common.sh@852 -- # return 0 00:16:50.578 20:43:34 -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:16:50.578 20:43:34 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:50.836 Malloc0 00:16:50.836 Malloc1 00:16:50.836 Malloc2 00:16:50.836 20:43:34 -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:16:50.836 20:43:34 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:16:50.836 20:43:34 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:16:50.836 20:43:34 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:16:51.094 5000+0 records in 00:16:51.094 5000+0 records out 00:16:51.094 10240000 bytes (10 MB) copied, 0.0318648 s, 321 MB/s 00:16:51.094 20:43:34 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:16:51.094 AIO0 00:16:51.094 20:43:34 -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 56722 00:16:51.094 20:43:34 -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 56722 without_thd 00:16:51.094 20:43:34 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=56722 00:16:51.094 20:43:34 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:16:51.094 20:43:34 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:16:51.094 20:43:34 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:16:51.094 20:43:34 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:16:51.094 20:43:34 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:16:51.094 20:43:34 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:16:51.094 20:43:34 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:16:51.094 20:43:34 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:16:51.094 20:43:34 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:16:51.352 20:43:34 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:16:51.353 20:43:34 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:16:51.353 20:43:34 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 
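reactor_get_thread_ids, traced above for masks 0x1 and 0x4, maps a reactor's CPU mask to the SPDK thread ids pinned there by filtering the thread_get_stats RPC output. A reconstruction from the trace (thread_get_stats reports the cpumask without a 0x prefix, hence the arithmetic strip):

reactor_get_thread_ids() {
    local reactor_cpumask=$1
    reactor_cpumask=$((reactor_cpumask))   # 0x1 -> 1, 0x4 -> 4
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats \
        | jq --arg reactor_cpumask "$reactor_cpumask" \
            '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
}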
00:16:51.353 20:43:34 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:16:51.353 20:43:34 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:16:51.353 20:43:34 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:16:51.353 20:43:34 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:16:51.353 20:43:34 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:16:51.353 20:43:34 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:16:51.610 20:43:34 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:16:51.611 spdk_thread ids are 1 on reactor0. 00:16:51.611 20:43:34 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:16:51.611 20:43:34 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:16:51.611 20:43:34 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:16:51.611 20:43:34 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 56722 0 00:16:51.611 20:43:34 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 56722 0 idle 00:16:51.611 20:43:34 -- interrupt/interrupt_common.sh@33 -- # local pid=56722 00:16:51.611 20:43:34 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:16:51.611 20:43:34 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:16:51.611 20:43:34 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:16:51.611 20:43:34 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:16:51.611 20:43:34 -- interrupt/interrupt_common.sh@41 -- # hash top 00:16:51.611 20:43:34 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:16:51.611 20:43:34 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:16:51.611 20:43:34 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 56722 -w 256 00:16:51.611 20:43:34 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:16:51.611 20:43:35 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 56722 root 20 0 20.1t 121528 11172 S 0.0 1.0 0:00.79 reactor_0' 00:16:51.611 20:43:35 -- interrupt/interrupt_common.sh@48 -- # echo 56722 root 20 0 20.1t 121528 11172 S 0.0 1.0 0:00.79 reactor_0 00:16:51.611 20:43:35 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:16:51.611 20:43:35 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:16:51.611 20:43:35 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:16:51.611 20:43:35 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:16:51.611 20:43:35 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:16:51.611 20:43:35 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:16:51.611 20:43:35 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:16:51.611 20:43:35 -- interrupt/interrupt_common.sh@56 -- # return 0 00:16:51.611 20:43:35 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:16:51.611 20:43:35 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 56722 1 00:16:51.611 20:43:35 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 56722 1 idle 00:16:51.611 20:43:35 -- interrupt/interrupt_common.sh@33 -- # local pid=56722 00:16:51.611 20:43:35 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:16:51.611 20:43:35 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:16:51.611 20:43:35 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:16:51.611 20:43:35 -- 
interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:16:51.611 20:43:35 -- interrupt/interrupt_common.sh@41 -- # hash top 00:16:51.611 20:43:35 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:16:51.611 20:43:35 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:16:51.611 20:43:35 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 56722 -w 256 00:16:51.611 20:43:35 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:16:51.869 20:43:35 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 56735 root 20 0 20.1t 121528 11172 S 0.0 1.0 0:00.00 reactor_1' 00:16:51.869 20:43:35 -- interrupt/interrupt_common.sh@48 -- # echo 56735 root 20 0 20.1t 121528 11172 S 0.0 1.0 0:00.00 reactor_1 00:16:51.869 20:43:35 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:16:51.869 20:43:35 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:16:51.869 20:43:35 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:16:51.869 20:43:35 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:16:51.869 20:43:35 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:16:51.869 20:43:35 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:16:51.869 20:43:35 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:16:51.869 20:43:35 -- interrupt/interrupt_common.sh@56 -- # return 0 00:16:51.869 20:43:35 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:16:51.869 20:43:35 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 56722 2 00:16:51.869 20:43:35 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 56722 2 idle 00:16:51.869 20:43:35 -- interrupt/interrupt_common.sh@33 -- # local pid=56722 00:16:51.869 20:43:35 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:16:51.869 20:43:35 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:16:51.869 20:43:35 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:16:51.869 20:43:35 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:16:51.869 20:43:35 -- interrupt/interrupt_common.sh@41 -- # hash top 00:16:51.869 20:43:35 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:16:51.869 20:43:35 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:16:51.869 20:43:35 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 56722 -w 256 00:16:51.869 20:43:35 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:16:52.127 20:43:35 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 56736 root 20 0 20.1t 121528 11172 S 0.0 1.0 0:00.00 reactor_2' 00:16:52.127 20:43:35 -- interrupt/interrupt_common.sh@48 -- # echo 56736 root 20 0 20.1t 121528 11172 S 0.0 1.0 0:00.00 reactor_2 00:16:52.127 20:43:35 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:16:52.127 20:43:35 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:16:52.127 20:43:35 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:16:52.127 20:43:35 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:16:52.127 20:43:35 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:16:52.127 20:43:35 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:16:52.127 20:43:35 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:16:52.127 20:43:35 -- interrupt/interrupt_common.sh@56 -- # return 0 00:16:52.127 20:43:35 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:16:52.127 20:43:35 -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:16:52.127 20:43:35 -- 
interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:16:52.127 [2024-04-15 20:43:35.574756] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:52.127 20:43:35 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:16:52.386 [2024-04-15 20:43:35.722531] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:16:52.386 [2024-04-15 20:43:35.723833] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:16:52.386 20:43:35 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:16:52.386 [2024-04-15 20:43:35.886419] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:16:52.644 [2024-04-15 20:43:35.887499] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:16:52.644 20:43:35 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:16:52.644 20:43:35 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 56722 0 00:16:52.644 20:43:35 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 56722 0 busy 00:16:52.644 20:43:35 -- interrupt/interrupt_common.sh@33 -- # local pid=56722 00:16:52.645 20:43:35 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:16:52.645 20:43:35 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:16:52.645 20:43:35 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:16:52.645 20:43:35 -- interrupt/interrupt_common.sh@41 -- # hash top 00:16:52.645 20:43:35 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:16:52.645 20:43:35 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:16:52.645 20:43:35 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:16:52.645 20:43:35 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 56722 -w 256 00:16:52.645 20:43:36 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 56722 root 20 0 20.1t 121648 11172 R 99.9 1.0 0:01.12 reactor_0' 00:16:52.645 20:43:36 -- interrupt/interrupt_common.sh@48 -- # echo 56722 root 20 0 20.1t 121648 11172 R 99.9 1.0 0:01.12 reactor_0 00:16:52.645 20:43:36 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:16:52.645 20:43:36 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:16:52.645 20:43:36 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:16:52.645 20:43:36 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:16:52.645 20:43:36 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:16:52.645 20:43:36 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:16:52.645 20:43:36 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:16:52.645 20:43:36 -- interrupt/interrupt_common.sh@56 -- # return 0 00:16:52.645 20:43:36 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:16:52.645 20:43:36 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 56722 2 00:16:52.645 20:43:36 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 56722 2 busy 00:16:52.645 20:43:36 -- interrupt/interrupt_common.sh@33 -- # local pid=56722 00:16:52.645 20:43:36 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:16:52.645 20:43:36 -- 
interrupt/interrupt_common.sh@35 -- # local state=busy 00:16:52.645 20:43:36 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:16:52.645 20:43:36 -- interrupt/interrupt_common.sh@41 -- # hash top 00:16:52.645 20:43:36 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:16:52.645 20:43:36 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:16:52.645 20:43:36 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 56722 -w 256 00:16:52.645 20:43:36 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:16:52.903 20:43:36 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 56736 root 20 0 20.1t 121648 11172 R 93.8 1.0 0:00.34 reactor_2' 00:16:52.903 20:43:36 -- interrupt/interrupt_common.sh@48 -- # echo 56736 root 20 0 20.1t 121648 11172 R 93.8 1.0 0:00.34 reactor_2 00:16:52.903 20:43:36 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:16:52.903 20:43:36 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:16:52.903 20:43:36 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=93.8 00:16:52.903 20:43:36 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=93 00:16:52.903 20:43:36 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:16:52.903 20:43:36 -- interrupt/interrupt_common.sh@51 -- # [[ 93 -lt 70 ]] 00:16:52.903 20:43:36 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:16:52.903 20:43:36 -- interrupt/interrupt_common.sh@56 -- # return 0 00:16:52.903 20:43:36 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:16:52.903 [2024-04-15 20:43:36.402439] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:16:53.161 [2024-04-15 20:43:36.404631] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:16:53.161 20:43:36 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:16:53.161 20:43:36 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 56722 2 00:16:53.161 20:43:36 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 56722 2 idle 00:16:53.161 20:43:36 -- interrupt/interrupt_common.sh@33 -- # local pid=56722 00:16:53.161 20:43:36 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:16:53.161 20:43:36 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:16:53.161 20:43:36 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:16:53.161 20:43:36 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:16:53.161 20:43:36 -- interrupt/interrupt_common.sh@41 -- # hash top 00:16:53.161 20:43:36 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:16:53.161 20:43:36 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:16:53.161 20:43:36 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:16:53.161 20:43:36 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 56722 -w 256 00:16:53.161 20:43:36 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 56736 root 20 0 20.1t 122268 11172 S 0.0 1.0 0:00.51 reactor_2' 00:16:53.161 20:43:36 -- interrupt/interrupt_common.sh@48 -- # echo 56736 root 20 0 20.1t 122268 11172 S 0.0 1.0 0:00.51 reactor_2 00:16:53.161 20:43:36 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:16:53.161 20:43:36 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:16:53.161 20:43:36 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:16:53.161 20:43:36 -- interrupt/interrupt_common.sh@49 -- # 
cpu_rate=0 00:16:53.161 20:43:36 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:16:53.161 20:43:36 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:16:53.161 20:43:36 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:16:53.161 20:43:36 -- interrupt/interrupt_common.sh@56 -- # return 0 00:16:53.161 20:43:36 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:16:53.420 [2024-04-15 20:43:36.746418] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:16:53.420 [2024-04-15 20:43:36.748294] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:16:53.420 20:43:36 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:16:53.420 20:43:36 -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:16:53.420 20:43:36 -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:16:53.420 [2024-04-15 20:43:36.894743] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:53.420 20:43:36 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 56722 0 00:16:53.420 20:43:36 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 56722 0 idle 00:16:53.420 20:43:36 -- interrupt/interrupt_common.sh@33 -- # local pid=56722 00:16:53.420 20:43:36 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:16:53.420 20:43:36 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:16:53.420 20:43:36 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:16:53.420 20:43:36 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:16:53.420 20:43:36 -- interrupt/interrupt_common.sh@41 -- # hash top 00:16:53.420 20:43:36 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:16:53.420 20:43:36 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:16:53.420 20:43:36 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:16:53.420 20:43:36 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 56722 -w 256 00:16:53.679 20:43:37 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 56722 root 20 0 20.1t 122352 11172 S 0.0 1.0 0:01.82 reactor_0' 00:16:53.679 20:43:37 -- interrupt/interrupt_common.sh@48 -- # echo 56722 root 20 0 20.1t 122352 11172 S 0.0 1.0 0:01.82 reactor_0 00:16:53.679 20:43:37 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:16:53.679 20:43:37 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:16:53.679 20:43:37 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:16:53.679 20:43:37 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:16:53.679 20:43:37 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:16:53.679 20:43:37 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:16:53.679 20:43:37 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:16:53.679 20:43:37 -- interrupt/interrupt_common.sh@56 -- # return 0 00:16:53.679 20:43:37 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:16:53.679 20:43:37 -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:16:53.679 20:43:37 -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:16:53.679 20:43:37 -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 56722 00:16:53.679 20:43:37 -- common/autotest_common.sh@926 
-- # '[' -z 56722 ']' 00:16:53.679 20:43:37 -- common/autotest_common.sh@930 -- # kill -0 56722 00:16:53.679 20:43:37 -- common/autotest_common.sh@931 -- # uname 00:16:53.680 20:43:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:53.680 20:43:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56722 00:16:53.680 killing process with pid 56722 00:16:53.680 20:43:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:53.680 20:43:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:53.680 20:43:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56722' 00:16:53.680 20:43:37 -- common/autotest_common.sh@945 -- # kill 56722 00:16:53.680 20:43:37 -- common/autotest_common.sh@950 -- # wait 56722 00:16:55.584 20:43:38 -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:16:55.584 20:43:38 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:16:55.584 20:43:38 -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:16:55.584 20:43:38 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.585 20:43:38 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:16:55.585 20:43:38 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=56881 00:16:55.585 20:43:38 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:55.585 20:43:38 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 56881 /var/tmp/spdk.sock 00:16:55.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.585 20:43:38 -- common/autotest_common.sh@819 -- # '[' -z 56881 ']' 00:16:55.585 20:43:38 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:16:55.585 20:43:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.585 20:43:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:55.585 20:43:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.585 20:43:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:55.585 20:43:38 -- common/autotest_common.sh@10 -- # set +x 00:16:55.585 [2024-04-15 20:43:38.863124] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:16:55.585 [2024-04-15 20:43:38.863284] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56881 ] 00:16:55.585 [2024-04-15 20:43:39.058135] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:55.844 [2024-04-15 20:43:39.267739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:55.844 [2024-04-15 20:43:39.267837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.844 [2024-04-15 20:43:39.267836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:56.104 [2024-04-15 20:43:39.586193] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
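Every reactor_is_idle / reactor_is_busy check in this test reduces to one batch sample from top: grab the reactor_<idx> thread row for the target pid, read the %CPU column, and compare against fixed thresholds (busy means at least 70%, idle at most 30%, as in the comparisons traced above and below). A sketch of the idle side:

reactor_is_idle() {
    local pid=$1 idx=$2 top_reactor cpu_rate
    top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
    cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
    cpu_rate=${cpu_rate%.*}        # 0.0 -> 0, 99.9 -> 99
    [[ ${cpu_rate:-0} -le 30 ]]    # idle: reactor thread at 30% CPU or less
}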
00:16:56.363 20:43:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:56.363 20:43:39 -- common/autotest_common.sh@852 -- # return 0 00:16:56.363 20:43:39 -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:16:56.363 20:43:39 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:56.622 Malloc0 00:16:56.622 Malloc1 00:16:56.622 Malloc2 00:16:56.622 20:43:39 -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:16:56.622 20:43:39 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:16:56.622 20:43:39 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:16:56.622 20:43:39 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:16:56.622 5000+0 records in 00:16:56.622 5000+0 records out 00:16:56.622 10240000 bytes (10 MB) copied, 0.0308389 s, 332 MB/s 00:16:56.622 20:43:39 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:16:56.881 AIO0 00:16:56.881 20:43:40 -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 56881 00:16:56.881 20:43:40 -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 56881 00:16:56.881 20:43:40 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=56881 00:16:56.881 20:43:40 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:16:56.881 20:43:40 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:16:56.881 20:43:40 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:16:56.881 20:43:40 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:16:56.881 20:43:40 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:16:56.881 20:43:40 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:16:56.881 20:43:40 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:16:56.881 20:43:40 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:16:56.881 20:43:40 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:16:57.141 20:43:40 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:16:57.141 20:43:40 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:16:57.141 20:43:40 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:16:57.141 20:43:40 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:16:57.141 20:43:40 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:16:57.141 20:43:40 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:16:57.141 20:43:40 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:16:57.141 20:43:40 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:16:57.141 20:43:40 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:16:57.141 20:43:40 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:16:57.141 spdk_thread ids are 1 on reactor0. 
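Both runs stage the same backing devices before any mode flips: three Malloc bdevs plus one AIO bdev over a zero-filled file, skipped on FreeBSD (hence the uname check in the trace). The setup_bdev_aio step as a standalone sketch, with paths from this log:

if [[ $(uname -s) != "FreeBSD" ]]; then
    aiofile=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile
    dd if=/dev/zero of="$aiofile" bs=2048 count=5000     # ~10 MB backing file
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create "$aiofile" AIO0 2048
fi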
00:16:57.141 20:43:40 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:16:57.141 20:43:40 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:16:57.141 20:43:40 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:16:57.141 20:43:40 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 56881 0 00:16:57.141 20:43:40 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 56881 0 idle 00:16:57.141 20:43:40 -- interrupt/interrupt_common.sh@33 -- # local pid=56881 00:16:57.141 20:43:40 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:16:57.141 20:43:40 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:16:57.141 20:43:40 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:16:57.141 20:43:40 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:16:57.141 20:43:40 -- interrupt/interrupt_common.sh@41 -- # hash top 00:16:57.141 20:43:40 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:16:57.141 20:43:40 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:16:57.141 20:43:40 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 56881 -w 256 00:16:57.141 20:43:40 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:16:57.400 20:43:40 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 56881 root 20 0 20.1t 121464 11172 S 0.0 1.0 0:00.78 reactor_0' 00:16:57.401 20:43:40 -- interrupt/interrupt_common.sh@48 -- # echo 56881 root 20 0 20.1t 121464 11172 S 0.0 1.0 0:00.78 reactor_0 00:16:57.401 20:43:40 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:16:57.401 20:43:40 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:16:57.401 20:43:40 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:16:57.401 20:43:40 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:16:57.401 20:43:40 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:16:57.401 20:43:40 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:16:57.401 20:43:40 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:16:57.401 20:43:40 -- interrupt/interrupt_common.sh@56 -- # return 0 00:16:57.401 20:43:40 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:16:57.401 20:43:40 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 56881 1 00:16:57.401 20:43:40 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 56881 1 idle 00:16:57.401 20:43:40 -- interrupt/interrupt_common.sh@33 -- # local pid=56881 00:16:57.401 20:43:40 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:16:57.401 20:43:40 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:16:57.401 20:43:40 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:16:57.401 20:43:40 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:16:57.401 20:43:40 -- interrupt/interrupt_common.sh@41 -- # hash top 00:16:57.401 20:43:40 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:16:57.401 20:43:40 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:16:57.401 20:43:40 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 56881 -w 256 00:16:57.401 20:43:40 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:16:57.660 20:43:40 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 56895 root 20 0 20.1t 121464 11172 S 0.0 1.0 0:00.00 reactor_1' 00:16:57.660 20:43:40 -- interrupt/interrupt_common.sh@48 -- # echo 56895 root 20 0 20.1t 121464 11172 S 0.0 1.0 0:00.00 reactor_1 00:16:57.660 20:43:40 -- 
interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:16:57.660 20:43:40 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:16:57.660 20:43:40 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:16:57.660 20:43:40 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:16:57.660 20:43:40 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:16:57.660 20:43:40 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:16:57.660 20:43:40 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:16:57.660 20:43:40 -- interrupt/interrupt_common.sh@56 -- # return 0 00:16:57.660 20:43:40 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:16:57.660 20:43:40 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 56881 2 00:16:57.660 20:43:40 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 56881 2 idle 00:16:57.660 20:43:40 -- interrupt/interrupt_common.sh@33 -- # local pid=56881 00:16:57.660 20:43:40 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:16:57.660 20:43:40 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:16:57.660 20:43:40 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:16:57.660 20:43:40 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:16:57.660 20:43:40 -- interrupt/interrupt_common.sh@41 -- # hash top 00:16:57.660 20:43:40 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:16:57.660 20:43:40 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:16:57.660 20:43:40 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 56881 -w 256 00:16:57.660 20:43:40 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:16:57.660 20:43:41 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 56896 root 20 0 20.1t 121464 11172 S 0.0 1.0 0:00.00 reactor_2' 00:16:57.660 20:43:41 -- interrupt/interrupt_common.sh@48 -- # echo 56896 root 20 0 20.1t 121464 11172 S 0.0 1.0 0:00.00 reactor_2 00:16:57.660 20:43:41 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:16:57.660 20:43:41 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:16:57.660 20:43:41 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:16:57.660 20:43:41 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:16:57.660 20:43:41 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:16:57.660 20:43:41 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:16:57.660 20:43:41 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:16:57.660 20:43:41 -- interrupt/interrupt_common.sh@56 -- # return 0 00:16:57.660 20:43:41 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:16:57.660 20:43:41 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:16:57.918 [2024-04-15 20:43:41.286402] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:16:57.919 [2024-04-15 20:43:41.286684] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 
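The mode flips above come from the interrupt_plugin RPC, which rpc.py loads from examples/interrupt_tgt via the PYTHONPATH exported earlier: with -d a reactor is switched from interrupt mode back to polling, and without it interrupt mode is restored. The two calls, as used throughout this test:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc_py" --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d   # reactor 0 -> poll mode
"$rpc_py" --plugin interrupt_plugin reactor_set_interrupt_mode 0      # reactor 0 -> interrupt mode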
00:16:57.919 [2024-04-15 20:43:41.288333] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:16:57.919 20:43:41 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:16:58.178 [2024-04-15 20:43:41.446013] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:16:58.178 [2024-04-15 20:43:41.446479] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:16:58.178 20:43:41 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:16:58.178 20:43:41 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 56881 0 00:16:58.178 20:43:41 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 56881 0 busy 00:16:58.178 20:43:41 -- interrupt/interrupt_common.sh@33 -- # local pid=56881 00:16:58.178 20:43:41 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:16:58.178 20:43:41 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:16:58.178 20:43:41 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:16:58.178 20:43:41 -- interrupt/interrupt_common.sh@41 -- # hash top 00:16:58.178 20:43:41 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:16:58.178 20:43:41 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:16:58.178 20:43:41 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 56881 -w 256 00:16:58.178 20:43:41 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:16:58.178 20:43:41 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 56881 root 20 0 20.1t 121544 11184 R 99.9 1.0 0:01.14 reactor_0' 00:16:58.178 20:43:41 -- interrupt/interrupt_common.sh@48 -- # echo 56881 root 20 0 20.1t 121544 11184 R 99.9 1.0 0:01.14 reactor_0 00:16:58.178 20:43:41 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:16:58.178 20:43:41 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:16:58.178 20:43:41 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:16:58.178 20:43:41 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:16:58.178 20:43:41 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:16:58.178 20:43:41 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:16:58.178 20:43:41 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:16:58.178 20:43:41 -- interrupt/interrupt_common.sh@56 -- # return 0 00:16:58.178 20:43:41 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:16:58.178 20:43:41 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 56881 2 00:16:58.178 20:43:41 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 56881 2 busy 00:16:58.178 20:43:41 -- interrupt/interrupt_common.sh@33 -- # local pid=56881 00:16:58.178 20:43:41 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:16:58.178 20:43:41 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:16:58.178 20:43:41 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:16:58.178 20:43:41 -- interrupt/interrupt_common.sh@41 -- # hash top 00:16:58.178 20:43:41 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:16:58.178 20:43:41 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:16:58.178 20:43:41 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 56881 -w 256 00:16:58.178 20:43:41 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:16:58.437 20:43:41 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 56896 root 
20 0 20.1t 121544 11184 R 99.9 1.0 0:00.36 reactor_2' 00:16:58.437 20:43:41 -- interrupt/interrupt_common.sh@48 -- # echo 56896 root 20 0 20.1t 121544 11184 R 99.9 1.0 0:00.36 reactor_2 00:16:58.437 20:43:41 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:16:58.438 20:43:41 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:16:58.438 20:43:41 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:16:58.438 20:43:41 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:16:58.438 20:43:41 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:16:58.438 20:43:41 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:16:58.438 20:43:41 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:16:58.438 20:43:41 -- interrupt/interrupt_common.sh@56 -- # return 0 00:16:58.438 20:43:41 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:16:58.697 [2024-04-15 20:43:42.021412] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:16:58.697 [2024-04-15 20:43:42.022075] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:16:58.697 20:43:42 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:16:58.697 20:43:42 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 56881 2 00:16:58.697 20:43:42 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 56881 2 idle 00:16:58.697 20:43:42 -- interrupt/interrupt_common.sh@33 -- # local pid=56881 00:16:58.697 20:43:42 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:16:58.697 20:43:42 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:16:58.697 20:43:42 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:16:58.697 20:43:42 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:16:58.697 20:43:42 -- interrupt/interrupt_common.sh@41 -- # hash top 00:16:58.697 20:43:42 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:16:58.697 20:43:42 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:16:58.697 20:43:42 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:16:58.697 20:43:42 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 56881 -w 256 00:16:58.957 20:43:42 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 56896 root 20 0 20.1t 121544 11184 S 0.0 1.0 0:00.57 reactor_2' 00:16:58.957 20:43:42 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:16:58.957 20:43:42 -- interrupt/interrupt_common.sh@48 -- # echo 56896 root 20 0 20.1t 121544 11184 S 0.0 1.0 0:00.57 reactor_2 00:16:58.957 20:43:42 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:16:58.957 20:43:42 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:16:58.957 20:43:42 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:16:58.957 20:43:42 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:16:58.957 20:43:42 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:16:58.957 20:43:42 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:16:58.957 20:43:42 -- interrupt/interrupt_common.sh@56 -- # return 0 00:16:58.957 20:43:42 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:16:58.957 [2024-04-15 20:43:42.424876] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt 
mode on reactor 0. 00:16:58.957 [2024-04-15 20:43:42.425234] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 00:16:58.957 [2024-04-15 20:43:42.425264] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:16:58.957 20:43:42 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:16:58.957 20:43:42 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 56881 0 00:16:58.957 20:43:42 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 56881 0 idle 00:16:58.957 20:43:42 -- interrupt/interrupt_common.sh@33 -- # local pid=56881 00:16:58.957 20:43:42 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:16:58.957 20:43:42 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:16:58.957 20:43:42 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:16:58.957 20:43:42 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:16:58.957 20:43:42 -- interrupt/interrupt_common.sh@41 -- # hash top 00:16:58.957 20:43:42 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:16:58.957 20:43:42 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:16:58.957 20:43:42 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 56881 -w 256 00:16:58.957 20:43:42 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:16:59.215 20:43:42 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 56881 root 20 0 20.1t 121652 11184 R 0.0 1.0 0:01.93 reactor_0' 00:16:59.215 20:43:42 -- interrupt/interrupt_common.sh@48 -- # echo 56881 root 20 0 20.1t 121652 11184 R 0.0 1.0 0:01.93 reactor_0 00:16:59.215 20:43:42 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:16:59.215 20:43:42 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:16:59.215 20:43:42 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:16:59.215 20:43:42 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:16:59.215 20:43:42 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:16:59.215 20:43:42 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:16:59.215 20:43:42 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:16:59.215 20:43:42 -- interrupt/interrupt_common.sh@56 -- # return 0 00:16:59.215 20:43:42 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:16:59.215 20:43:42 -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:16:59.215 20:43:42 -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:16:59.215 20:43:42 -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 56881 00:16:59.215 20:43:42 -- common/autotest_common.sh@926 -- # '[' -z 56881 ']' 00:16:59.216 20:43:42 -- common/autotest_common.sh@930 -- # kill -0 56881 00:16:59.216 20:43:42 -- common/autotest_common.sh@931 -- # uname 00:16:59.216 20:43:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:59.216 20:43:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 56881 00:16:59.216 killing process with pid 56881 00:16:59.216 20:43:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:59.216 20:43:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:59.216 20:43:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 56881' 00:16:59.216 20:43:42 -- common/autotest_common.sh@945 -- # kill 56881 00:16:59.216 20:43:42 -- common/autotest_common.sh@950 -- # wait 56881 00:17:01.183 20:43:44 -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:17:01.183 
20:43:44 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:17:01.183 ************************************ 00:17:01.183 END TEST reactor_set_interrupt 00:17:01.183 ************************************ 00:17:01.183 00:17:01.183 real 0m12.064s 00:17:01.183 user 0m11.839s 00:17:01.183 sys 0m1.562s 00:17:01.183 20:43:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:01.183 20:43:44 -- common/autotest_common.sh@10 -- # set +x 00:17:01.183 20:43:44 -- spdk/autotest.sh@200 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:17:01.183 20:43:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:01.183 20:43:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:01.184 20:43:44 -- common/autotest_common.sh@10 -- # set +x 00:17:01.184 ************************************ 00:17:01.184 START TEST reap_unregistered_poller 00:17:01.184 ************************************ 00:17:01.184 20:43:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:17:01.184 * Looking for test storage... 00:17:01.184 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:17:01.184 20:43:44 -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:17:01.184 20:43:44 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:17:01.184 20:43:44 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:17:01.184 20:43:44 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:17:01.184 20:43:44 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 
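The START/END banners and the real/user/sys lines around each test come from autotest_common.sh's run_test wrapper. A rough sketch of its shape, inferred from this output; the banner width and argument handling here are approximations, not the helper verbatim:

run_test() {
    [ $# -le 1 ] && return 1    # the '[' 2 -le 1 ']' argc guard traced above
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"                   # bash keyword; emits the real/user/sys summary
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return $rc
}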
00:17:01.184 20:43:44 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:01.184 20:43:44 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:17:01.184 20:43:44 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:17:01.184 20:43:44 -- common/autotest_common.sh@34 -- # set -e 00:17:01.184 20:43:44 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:17:01.184 20:43:44 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:17:01.184 20:43:44 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:17:01.184 20:43:44 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:17:01.184 20:43:44 -- common/build_config.sh@1 -- # CONFIG_RDMA=y 00:17:01.184 20:43:44 -- common/build_config.sh@2 -- # CONFIG_UNIT_TESTS=y 00:17:01.184 20:43:44 -- common/build_config.sh@3 -- # CONFIG_GOLANG=n 00:17:01.184 20:43:44 -- common/build_config.sh@4 -- # CONFIG_FUSE=n 00:17:01.184 20:43:44 -- common/build_config.sh@5 -- # CONFIG_ISAL=n 00:17:01.184 20:43:44 -- common/build_config.sh@6 -- # CONFIG_VTUNE_DIR= 00:17:01.184 20:43:44 -- common/build_config.sh@7 -- # CONFIG_CUSTOMOCF=n 00:17:01.184 20:43:44 -- common/build_config.sh@8 -- # CONFIG_IPSEC_MB_DIR= 00:17:01.184 20:43:44 -- common/build_config.sh@9 -- # CONFIG_VBDEV_COMPRESS=n 00:17:01.184 20:43:44 -- common/build_config.sh@10 -- # CONFIG_OCF_PATH= 00:17:01.184 20:43:44 -- common/build_config.sh@11 -- # CONFIG_SHARED=n 00:17:01.184 20:43:44 -- common/build_config.sh@12 -- # CONFIG_DPDK_LIB_DIR= 00:17:01.184 20:43:44 -- common/build_config.sh@13 -- # CONFIG_TESTS=y 00:17:01.184 20:43:44 -- common/build_config.sh@14 -- # CONFIG_APPS=y 00:17:01.184 20:43:44 -- common/build_config.sh@15 -- # CONFIG_ISAL_CRYPTO=n 00:17:01.184 20:43:44 -- common/build_config.sh@16 -- # CONFIG_LIBDIR= 00:17:01.184 20:43:44 -- common/build_config.sh@17 -- # CONFIG_DPDK_COMPRESSDEV=n 00:17:01.184 20:43:44 -- common/build_config.sh@18 -- # CONFIG_DAOS_DIR= 00:17:01.184 20:43:44 -- common/build_config.sh@19 -- # CONFIG_ISCSI_INITIATOR=n 00:17:01.184 20:43:44 -- common/build_config.sh@20 -- # CONFIG_DPDK_PKG_CONFIG=n 00:17:01.184 20:43:44 -- common/build_config.sh@21 -- # CONFIG_ASAN=y 00:17:01.184 20:43:44 -- common/build_config.sh@22 -- # CONFIG_LTO=n 00:17:01.184 20:43:44 -- common/build_config.sh@23 -- # CONFIG_CET=n 00:17:01.184 20:43:44 -- common/build_config.sh@24 -- # CONFIG_FUZZER=n 00:17:01.184 20:43:44 -- common/build_config.sh@25 -- # CONFIG_USDT=n 00:17:01.184 20:43:44 -- common/build_config.sh@26 -- # CONFIG_VTUNE=n 00:17:01.184 20:43:44 -- common/build_config.sh@27 -- # CONFIG_VHOST=y 00:17:01.184 20:43:44 -- common/build_config.sh@28 -- # CONFIG_WPDK_DIR= 00:17:01.184 20:43:44 -- common/build_config.sh@29 -- # CONFIG_UBLK=n 00:17:01.184 20:43:44 -- common/build_config.sh@30 -- # CONFIG_URING=n 00:17:01.184 20:43:44 -- common/build_config.sh@31 -- # CONFIG_SMA=n 00:17:01.184 20:43:44 -- common/build_config.sh@32 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:17:01.184 20:43:44 -- common/build_config.sh@33 -- # CONFIG_IDXD_KERNEL=n 00:17:01.184 20:43:44 -- common/build_config.sh@34 -- # CONFIG_FC_PATH= 00:17:01.184 20:43:44 -- common/build_config.sh@35 -- # CONFIG_PREFIX=/usr/local 00:17:01.184 20:43:44 -- common/build_config.sh@36 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=n 00:17:01.184 20:43:44 -- common/build_config.sh@37 -- # CONFIG_XNVME=n 00:17:01.184 20:43:44 -- common/build_config.sh@38 
-- # CONFIG_RDMA_PROV=verbs 00:17:01.184 20:43:44 -- common/build_config.sh@39 -- # CONFIG_RDMA_SET_TOS=y 00:17:01.184 20:43:44 -- common/build_config.sh@40 -- # CONFIG_FUZZER_LIB= 00:17:01.184 20:43:44 -- common/build_config.sh@41 -- # CONFIG_HAVE_LIBARCHIVE=n 00:17:01.184 20:43:44 -- common/build_config.sh@42 -- # CONFIG_ARCH=native 00:17:01.184 20:43:44 -- common/build_config.sh@43 -- # CONFIG_PGO_CAPTURE=n 00:17:01.184 20:43:44 -- common/build_config.sh@44 -- # CONFIG_DAOS=y 00:17:01.184 20:43:44 -- common/build_config.sh@45 -- # CONFIG_WERROR=y 00:17:01.184 20:43:44 -- common/build_config.sh@46 -- # CONFIG_DEBUG=y 00:17:01.184 20:43:44 -- common/build_config.sh@47 -- # CONFIG_AVAHI=n 00:17:01.184 20:43:44 -- common/build_config.sh@48 -- # CONFIG_CROSS_PREFIX= 00:17:01.184 20:43:44 -- common/build_config.sh@49 -- # CONFIG_PGO_USE=n 00:17:01.184 20:43:44 -- common/build_config.sh@50 -- # CONFIG_CRYPTO=n 00:17:01.184 20:43:44 -- common/build_config.sh@51 -- # CONFIG_HAVE_ARC4RANDOM=n 00:17:01.184 20:43:44 -- common/build_config.sh@52 -- # CONFIG_OPENSSL_PATH= 00:17:01.184 20:43:44 -- common/build_config.sh@53 -- # CONFIG_EXAMPLES=y 00:17:01.184 20:43:44 -- common/build_config.sh@54 -- # CONFIG_DPDK_INC_DIR= 00:17:01.184 20:43:44 -- common/build_config.sh@55 -- # CONFIG_MAX_LCORES= 00:17:01.184 20:43:44 -- common/build_config.sh@56 -- # CONFIG_VIRTIO=y 00:17:01.184 20:43:44 -- common/build_config.sh@57 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:17:01.184 20:43:44 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB=n 00:17:01.184 20:43:44 -- common/build_config.sh@59 -- # CONFIG_UBSAN=n 00:17:01.184 20:43:44 -- common/build_config.sh@60 -- # CONFIG_HAVE_EXECINFO_H=y 00:17:01.184 20:43:44 -- common/build_config.sh@61 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:17:01.184 20:43:44 -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:17:01.184 20:43:44 -- common/build_config.sh@63 -- # CONFIG_URING_PATH= 00:17:01.184 20:43:44 -- common/build_config.sh@64 -- # CONFIG_NVME_CUSE=y 00:17:01.184 20:43:44 -- common/build_config.sh@65 -- # CONFIG_URING_ZNS=n 00:17:01.184 20:43:44 -- common/build_config.sh@66 -- # CONFIG_VFIO_USER=n 00:17:01.184 20:43:44 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:17:01.184 20:43:44 -- common/build_config.sh@68 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=n 00:17:01.184 20:43:44 -- common/build_config.sh@69 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:17:01.184 20:43:44 -- common/build_config.sh@70 -- # CONFIG_RBD=n 00:17:01.184 20:43:44 -- common/build_config.sh@71 -- # CONFIG_RAID5F=n 00:17:01.184 20:43:44 -- common/build_config.sh@72 -- # CONFIG_VFIO_USER_DIR= 00:17:01.184 20:43:44 -- common/build_config.sh@73 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:17:01.184 20:43:44 -- common/build_config.sh@74 -- # CONFIG_TSAN=n 00:17:01.184 20:43:44 -- common/build_config.sh@75 -- # CONFIG_IDXD=y 00:17:01.184 20:43:44 -- common/build_config.sh@76 -- # CONFIG_OCF=n 00:17:01.184 20:43:44 -- common/build_config.sh@77 -- # CONFIG_CRYPTO_MLX5=n 00:17:01.184 20:43:44 -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:17:01.184 20:43:44 -- common/build_config.sh@79 -- # CONFIG_COVERAGE=y 00:17:01.184 20:43:44 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:17:01.184 20:43:44 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:17:01.184 20:43:44 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:17:01.184 
20:43:44 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:17:01.184 20:43:44 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:17:01.184 20:43:44 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:17:01.184 20:43:44 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:17:01.184 20:43:44 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:17:01.184 20:43:44 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:17:01.184 20:43:44 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:17:01.184 20:43:44 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:17:01.184 20:43:44 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:17:01.184 20:43:44 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:17:01.184 20:43:44 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:17:01.184 20:43:44 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:17:01.184 20:43:44 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:17:01.184 #define SPDK_CONFIG_H 00:17:01.184 #define SPDK_CONFIG_APPS 1 00:17:01.184 #define SPDK_CONFIG_ARCH native 00:17:01.184 #define SPDK_CONFIG_ASAN 1 00:17:01.184 #undef SPDK_CONFIG_AVAHI 00:17:01.184 #undef SPDK_CONFIG_CET 00:17:01.184 #define SPDK_CONFIG_COVERAGE 1 00:17:01.184 #define SPDK_CONFIG_CROSS_PREFIX 00:17:01.184 #undef SPDK_CONFIG_CRYPTO 00:17:01.184 #undef SPDK_CONFIG_CRYPTO_MLX5 00:17:01.184 #undef SPDK_CONFIG_CUSTOMOCF 00:17:01.184 #define SPDK_CONFIG_DAOS 1 00:17:01.184 #define SPDK_CONFIG_DAOS_DIR 00:17:01.184 #define SPDK_CONFIG_DEBUG 1 00:17:01.184 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:17:01.184 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:17:01.184 #define SPDK_CONFIG_DPDK_INC_DIR 00:17:01.185 #define SPDK_CONFIG_DPDK_LIB_DIR 00:17:01.185 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:17:01.185 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:17:01.185 #define SPDK_CONFIG_EXAMPLES 1 00:17:01.185 #undef SPDK_CONFIG_FC 00:17:01.185 #define SPDK_CONFIG_FC_PATH 00:17:01.185 #define SPDK_CONFIG_FIO_PLUGIN 1 00:17:01.185 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:17:01.185 #undef SPDK_CONFIG_FUSE 00:17:01.185 #undef SPDK_CONFIG_FUZZER 00:17:01.185 #define SPDK_CONFIG_FUZZER_LIB 00:17:01.185 #undef SPDK_CONFIG_GOLANG 00:17:01.185 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:17:01.185 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:17:01.185 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:17:01.185 #undef SPDK_CONFIG_HAVE_LIBBSD 00:17:01.185 #undef SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 00:17:01.185 #define SPDK_CONFIG_IDXD 1 00:17:01.185 #undef SPDK_CONFIG_IDXD_KERNEL 00:17:01.185 #undef SPDK_CONFIG_IPSEC_MB 00:17:01.185 #define SPDK_CONFIG_IPSEC_MB_DIR 00:17:01.185 #undef SPDK_CONFIG_ISAL 00:17:01.185 #undef SPDK_CONFIG_ISAL_CRYPTO 00:17:01.185 #undef SPDK_CONFIG_ISCSI_INITIATOR 00:17:01.185 #define SPDK_CONFIG_LIBDIR 00:17:01.185 #undef SPDK_CONFIG_LTO 00:17:01.185 #define SPDK_CONFIG_MAX_LCORES 00:17:01.185 #define SPDK_CONFIG_NVME_CUSE 1 00:17:01.185 #undef SPDK_CONFIG_OCF 00:17:01.185 #define SPDK_CONFIG_OCF_PATH 00:17:01.185 #define SPDK_CONFIG_OPENSSL_PATH 00:17:01.185 #undef SPDK_CONFIG_PGO_CAPTURE 00:17:01.185 #undef SPDK_CONFIG_PGO_USE 00:17:01.185 #define SPDK_CONFIG_PREFIX /usr/local 00:17:01.185 #undef 
SPDK_CONFIG_RAID5F 00:17:01.185 #undef SPDK_CONFIG_RBD 00:17:01.185 #define SPDK_CONFIG_RDMA 1 00:17:01.185 #define SPDK_CONFIG_RDMA_PROV verbs 00:17:01.185 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:17:01.185 #undef SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 00:17:01.185 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:17:01.185 #undef SPDK_CONFIG_SHARED 00:17:01.185 #undef SPDK_CONFIG_SMA 00:17:01.185 #define SPDK_CONFIG_TESTS 1 00:17:01.185 #undef SPDK_CONFIG_TSAN 00:17:01.185 #undef SPDK_CONFIG_UBLK 00:17:01.185 #undef SPDK_CONFIG_UBSAN 00:17:01.185 #define SPDK_CONFIG_UNIT_TESTS 1 00:17:01.185 #undef SPDK_CONFIG_URING 00:17:01.185 #define SPDK_CONFIG_URING_PATH 00:17:01.185 #undef SPDK_CONFIG_URING_ZNS 00:17:01.185 #undef SPDK_CONFIG_USDT 00:17:01.185 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:17:01.185 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:17:01.185 #undef SPDK_CONFIG_VFIO_USER 00:17:01.185 #define SPDK_CONFIG_VFIO_USER_DIR 00:17:01.185 #define SPDK_CONFIG_VHOST 1 00:17:01.185 #define SPDK_CONFIG_VIRTIO 1 00:17:01.185 #undef SPDK_CONFIG_VTUNE 00:17:01.185 #define SPDK_CONFIG_VTUNE_DIR 00:17:01.185 #define SPDK_CONFIG_WERROR 1 00:17:01.185 #define SPDK_CONFIG_WPDK_DIR 00:17:01.185 #undef SPDK_CONFIG_XNVME 00:17:01.185 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:17:01.185 20:43:44 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:17:01.185 20:43:44 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:01.185 20:43:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:01.185 20:43:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:01.185 20:43:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:01.185 20:43:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:17:01.185 20:43:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:17:01.185 20:43:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:17:01.185 20:43:44 -- paths/export.sh@5 -- # export PATH 00:17:01.185 20:43:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:17:01.185 20:43:44 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:17:01.185 20:43:44 -- 
pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:17:01.185 20:43:44 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:17:01.185 20:43:44 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:17:01.185 20:43:44 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:17:01.185 20:43:44 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:17:01.185 20:43:44 -- pm/common@16 -- # TEST_TAG=N/A 00:17:01.185 20:43:44 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:17:01.185 20:43:44 -- common/autotest_common.sh@52 -- # : 1 00:17:01.185 20:43:44 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:17:01.185 20:43:44 -- common/autotest_common.sh@56 -- # : 0 00:17:01.185 20:43:44 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:17:01.185 20:43:44 -- common/autotest_common.sh@58 -- # : 0 00:17:01.185 20:43:44 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:17:01.185 20:43:44 -- common/autotest_common.sh@60 -- # : 1 00:17:01.185 20:43:44 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:17:01.185 20:43:44 -- common/autotest_common.sh@62 -- # : 1 00:17:01.185 20:43:44 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:17:01.185 20:43:44 -- common/autotest_common.sh@64 -- # : 00:17:01.185 20:43:44 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:17:01.185 20:43:44 -- common/autotest_common.sh@66 -- # : 0 00:17:01.185 20:43:44 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:17:01.185 20:43:44 -- common/autotest_common.sh@68 -- # : 0 00:17:01.185 20:43:44 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:17:01.185 20:43:44 -- common/autotest_common.sh@70 -- # : 0 00:17:01.185 20:43:44 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:17:01.185 20:43:44 -- common/autotest_common.sh@72 -- # : 0 00:17:01.185 20:43:44 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:17:01.185 20:43:44 -- common/autotest_common.sh@74 -- # : 0 00:17:01.185 20:43:44 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:17:01.185 20:43:44 -- common/autotest_common.sh@76 -- # : 0 00:17:01.185 20:43:44 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:17:01.185 20:43:44 -- common/autotest_common.sh@78 -- # : 0 00:17:01.185 20:43:44 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:17:01.185 20:43:44 -- common/autotest_common.sh@80 -- # : 0 00:17:01.185 20:43:44 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:17:01.185 20:43:44 -- common/autotest_common.sh@82 -- # : 0 00:17:01.185 20:43:44 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:17:01.185 20:43:44 -- common/autotest_common.sh@84 -- # : 0 00:17:01.185 20:43:44 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:17:01.185 20:43:44 -- common/autotest_common.sh@86 -- # : 0 00:17:01.185 20:43:44 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:17:01.185 20:43:44 -- common/autotest_common.sh@88 -- # : 0 00:17:01.185 20:43:44 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:17:01.185 20:43:44 -- common/autotest_common.sh@90 -- # : 0 00:17:01.185 20:43:44 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:17:01.185 20:43:44 -- common/autotest_common.sh@92 -- # : 0 00:17:01.185 20:43:44 -- common/autotest_common.sh@93 -- 
# export SPDK_TEST_FUZZER 00:17:01.185 20:43:44 -- common/autotest_common.sh@94 -- # : 0 00:17:01.185 20:43:44 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:17:01.185 20:43:44 -- common/autotest_common.sh@96 -- # : rdma 00:17:01.185 20:43:44 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:17:01.185 20:43:44 -- common/autotest_common.sh@98 -- # : 0 00:17:01.185 20:43:44 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:17:01.185 20:43:44 -- common/autotest_common.sh@100 -- # : 0 00:17:01.185 20:43:44 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:17:01.185 20:43:44 -- common/autotest_common.sh@102 -- # : 1 00:17:01.185 20:43:44 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:17:01.185 20:43:44 -- common/autotest_common.sh@104 -- # : 0 00:17:01.185 20:43:44 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:17:01.185 20:43:44 -- common/autotest_common.sh@106 -- # : 0 00:17:01.185 20:43:44 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:17:01.185 20:43:44 -- common/autotest_common.sh@108 -- # : 0 00:17:01.185 20:43:44 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:17:01.185 20:43:44 -- common/autotest_common.sh@110 -- # : 0 00:17:01.185 20:43:44 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:17:01.185 20:43:44 -- common/autotest_common.sh@112 -- # : 0 00:17:01.185 20:43:44 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:17:01.185 20:43:44 -- common/autotest_common.sh@114 -- # : 1 00:17:01.185 20:43:44 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:17:01.185 20:43:44 -- common/autotest_common.sh@116 -- # : 0 00:17:01.185 20:43:44 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:17:01.185 20:43:44 -- common/autotest_common.sh@118 -- # : 00:17:01.185 20:43:44 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:17:01.185 20:43:44 -- common/autotest_common.sh@120 -- # : 0 00:17:01.185 20:43:44 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:17:01.185 20:43:44 -- common/autotest_common.sh@122 -- # : 0 00:17:01.186 20:43:44 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:17:01.186 20:43:44 -- common/autotest_common.sh@124 -- # : 0 00:17:01.186 20:43:44 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:17:01.186 20:43:44 -- common/autotest_common.sh@126 -- # : 0 00:17:01.186 20:43:44 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:17:01.186 20:43:44 -- common/autotest_common.sh@128 -- # : 0 00:17:01.186 20:43:44 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:17:01.186 20:43:44 -- common/autotest_common.sh@130 -- # : 0 00:17:01.186 20:43:44 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:17:01.186 20:43:44 -- common/autotest_common.sh@132 -- # : 00:17:01.186 20:43:44 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:17:01.186 20:43:44 -- common/autotest_common.sh@134 -- # : true 00:17:01.186 20:43:44 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:17:01.186 20:43:44 -- common/autotest_common.sh@136 -- # : 0 00:17:01.186 20:43:44 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:17:01.186 20:43:44 -- common/autotest_common.sh@138 -- # : 0 00:17:01.186 20:43:44 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:17:01.186 20:43:44 -- common/autotest_common.sh@140 -- # : 0 00:17:01.186 20:43:44 -- 
common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:17:01.186 20:43:44 -- common/autotest_common.sh@142 -- # : 0 00:17:01.186 20:43:44 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:17:01.186 20:43:44 -- common/autotest_common.sh@144 -- # : 0 00:17:01.186 20:43:44 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:17:01.186 20:43:44 -- common/autotest_common.sh@146 -- # : 0 00:17:01.186 20:43:44 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:17:01.186 20:43:44 -- common/autotest_common.sh@148 -- # : 00:17:01.186 20:43:44 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:17:01.186 20:43:44 -- common/autotest_common.sh@150 -- # : 0 00:17:01.186 20:43:44 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:17:01.186 20:43:44 -- common/autotest_common.sh@152 -- # : 1 00:17:01.186 20:43:44 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:17:01.186 20:43:44 -- common/autotest_common.sh@154 -- # : 0 00:17:01.186 20:43:44 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:17:01.186 20:43:44 -- common/autotest_common.sh@156 -- # : 0 00:17:01.186 20:43:44 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:17:01.186 20:43:44 -- common/autotest_common.sh@158 -- # : 0 00:17:01.186 20:43:44 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:17:01.186 20:43:44 -- common/autotest_common.sh@160 -- # : 0 00:17:01.186 20:43:44 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:17:01.186 20:43:44 -- common/autotest_common.sh@163 -- # : 00:17:01.186 20:43:44 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:17:01.186 20:43:44 -- common/autotest_common.sh@165 -- # : 0 00:17:01.186 20:43:44 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:17:01.186 20:43:44 -- common/autotest_common.sh@167 -- # : 0 00:17:01.186 20:43:44 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:17:01.186 20:43:44 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:17:01.186 20:43:44 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:17:01.186 20:43:44 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:17:01.186 20:43:44 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:17:01.186 20:43:44 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:17:01.186 20:43:44 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:17:01.186 20:43:44 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:17:01.186 20:43:44 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:17:01.186 20:43:44 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:17:01.186 20:43:44 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:17:01.186 20:43:44 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:17:01.186 20:43:44 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:17:01.186 20:43:44 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:17:01.186 20:43:44 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:17:01.186 20:43:44 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:17:01.186 20:43:44 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:17:01.186 20:43:44 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:17:01.186 20:43:44 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:17:01.186 20:43:44 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:17:01.186 20:43:44 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:17:01.186 20:43:44 -- common/autotest_common.sh@196 -- # cat 00:17:01.186 20:43:44 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:17:01.186 20:43:44 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:17:01.186 20:43:44 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:17:01.186 20:43:44 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:17:01.186 20:43:44 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:17:01.186 20:43:44 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:17:01.186 20:43:44 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:17:01.186 20:43:44 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:17:01.186 20:43:44 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:17:01.186 20:43:44 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:17:01.186 20:43:44 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:17:01.186 20:43:44 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:17:01.186 20:43:44 
-- common/autotest_common.sh@239 -- # QEMU_BIN= 00:17:01.186 20:43:44 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:17:01.186 20:43:44 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:17:01.186 20:43:44 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:17:01.186 20:43:44 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:17:01.187 20:43:44 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:17:01.187 20:43:44 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:17:01.187 20:43:44 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:17:01.187 20:43:44 -- common/autotest_common.sh@249 -- # export valgrind= 00:17:01.187 20:43:44 -- common/autotest_common.sh@249 -- # valgrind= 00:17:01.187 20:43:44 -- common/autotest_common.sh@255 -- # uname -s 00:17:01.187 20:43:44 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:17:01.187 20:43:44 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:17:01.187 20:43:44 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:17:01.187 20:43:44 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:17:01.187 20:43:44 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:17:01.187 20:43:44 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:17:01.187 20:43:44 -- common/autotest_common.sh@265 -- # MAKE=make 00:17:01.187 20:43:44 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:17:01.187 20:43:44 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:17:01.187 20:43:44 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:17:01.187 20:43:44 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:17:01.187 20:43:44 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:17:01.187 20:43:44 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:17:01.187 20:43:44 -- common/autotest_common.sh@309 -- # [[ -z 57069 ]] 00:17:01.187 20:43:44 -- common/autotest_common.sh@309 -- # kill -0 57069 00:17:01.187 20:43:44 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:17:01.187 20:43:44 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:17:01.187 20:43:44 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:17:01.187 20:43:44 -- common/autotest_common.sh@322 -- # local mount target_dir 00:17:01.187 20:43:44 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:17:01.187 20:43:44 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:17:01.187 20:43:44 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:17:01.187 20:43:44 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:17:01.187 20:43:44 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.Sa8vVj 00:17:01.187 20:43:44 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:17:01.187 20:43:44 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:17:01.187 20:43:44 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:17:01.187 20:43:44 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.Sa8vVj/tests/interrupt /tmp/spdk.Sa8vVj 00:17:01.187 20:43:44 -- common/autotest_common.sh@349 -- # requested_size=2214592512 
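The df walk that follows is set_test_storage sizing up candidate directories: it asks for roughly 2 GiB plus overhead (the 2214592512 above), builds a mount table from df -T, and settles on the first candidate whose filesystem has room. A minimal approximation, assuming GNU df/mktemp and reusing this run's paths; the real helper also special-cases tmpfs and ramfs mounts:

testdir=/home/vagrant/spdk_repo/spdk/test/interrupt
storage_fallback=$(mktemp -udt spdk.XXXXXX)   # -u: generate a name, create nothing
requested_size=$(( 2 * 1024 * 1024 * 1024 ))  # ~2 GiB, before the helper's slack
for dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
    mkdir -p "$dir" 2>/dev/null || continue
    mount=$(df "$dir" | awk '$1 !~ /Filesystem/{print $6}')   # the awk from the trace
    avail=$(df -B1 --output=avail "$mount" | tail -n1)
    if (( avail >= requested_size )); then
        export SPDK_TEST_STORAGE=$dir
        break
    fi
done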
00:17:01.187 20:43:44 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:17:01.187 20:43:44 -- common/autotest_common.sh@318 -- # df -T 00:17:01.187 20:43:44 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:17:01.187 20:43:44 -- common/autotest_common.sh@352 -- # mounts["$mount"]=devtmpfs 00:17:01.187 20:43:44 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:17:01.187 20:43:44 -- common/autotest_common.sh@353 -- # avails["$mount"]=6267637760 00:17:01.187 20:43:44 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6267637760 00:17:01.187 20:43:44 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:17:01.187 20:43:44 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:17:01.187 20:43:44 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:17:01.187 20:43:44 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:17:01.187 20:43:44 -- common/autotest_common.sh@353 -- # avails["$mount"]=6295592960 00:17:01.187 20:43:44 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6298185728 00:17:01.187 20:43:44 -- common/autotest_common.sh@354 -- # uses["$mount"]=2592768 00:17:01.187 20:43:44 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:17:01.187 20:43:44 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:17:01.187 20:43:44 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:17:01.187 20:43:44 -- common/autotest_common.sh@353 -- # avails["$mount"]=6277238784 00:17:01.187 20:43:44 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6298185728 00:17:01.187 20:43:44 -- common/autotest_common.sh@354 -- # uses["$mount"]=20946944 00:17:01.187 20:43:44 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:17:01.187 20:43:44 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:17:01.187 20:43:44 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:17:01.187 20:43:44 -- common/autotest_common.sh@353 -- # avails["$mount"]=6298185728 00:17:01.187 20:43:44 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6298185728 00:17:01.187 20:43:44 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:17:01.187 20:43:44 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:17:01.187 20:43:44 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda1 00:17:01.187 20:43:44 -- common/autotest_common.sh@352 -- # fss["$mount"]=xfs 00:17:01.187 20:43:44 -- common/autotest_common.sh@353 -- # avails["$mount"]=14369124352 00:17:01.187 20:43:44 -- common/autotest_common.sh@353 -- # sizes["$mount"]=21463302144 00:17:01.187 20:43:44 -- common/autotest_common.sh@354 -- # uses["$mount"]=7094177792 00:17:01.187 20:43:44 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:17:01.187 20:43:44 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:17:01.187 20:43:44 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:17:01.187 20:43:44 -- common/autotest_common.sh@353 -- # avails["$mount"]=1259638784 00:17:01.187 20:43:44 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1259638784 00:17:01.187 20:43:44 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:17:01.187 20:43:44 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:17:01.187 20:43:44 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/centos7-vg-autotest/centos7-libvirt/output 00:17:01.187 20:43:44 
-- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:17:01.187 20:43:44 -- common/autotest_common.sh@353 -- # avails["$mount"]=93640630272 00:17:01.187 20:43:44 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:17:01.187 20:43:44 -- common/autotest_common.sh@354 -- # uses["$mount"]=6062149632 00:17:01.187 20:43:44 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:17:01.187 20:43:44 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:17:01.187 * Looking for test storage... 00:17:01.187 20:43:44 -- common/autotest_common.sh@359 -- # local target_space new_size 00:17:01.187 20:43:44 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:17:01.187 20:43:44 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:17:01.187 20:43:44 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:17:01.187 20:43:44 -- common/autotest_common.sh@363 -- # mount=/ 00:17:01.187 20:43:44 -- common/autotest_common.sh@365 -- # target_space=14369124352 00:17:01.187 20:43:44 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:17:01.188 20:43:44 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:17:01.188 20:43:44 -- common/autotest_common.sh@371 -- # [[ xfs == tmpfs ]] 00:17:01.188 20:43:44 -- common/autotest_common.sh@371 -- # [[ xfs == ramfs ]] 00:17:01.188 20:43:44 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:17:01.188 20:43:44 -- common/autotest_common.sh@372 -- # new_size=9308770304 00:17:01.188 20:43:44 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:17:01.188 20:43:44 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:17:01.188 20:43:44 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:17:01.188 20:43:44 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:17:01.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:17:01.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.188 20:43:44 -- common/autotest_common.sh@380 -- # return 0 00:17:01.188 20:43:44 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:17:01.188 20:43:44 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:17:01.188 20:43:44 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:17:01.188 20:43:44 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:17:01.188 20:43:44 -- common/autotest_common.sh@1672 -- # true 00:17:01.188 20:43:44 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:17:01.188 20:43:44 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:17:01.188 20:43:44 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:17:01.188 20:43:44 -- common/autotest_common.sh@27 -- # exec 00:17:01.188 20:43:44 -- common/autotest_common.sh@29 -- # exec 00:17:01.188 20:43:44 -- common/autotest_common.sh@31 -- # xtrace_restore 00:17:01.188 20:43:44 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:17:01.188 20:43:44 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:17:01.188 20:43:44 -- common/autotest_common.sh@18 -- # set -x 00:17:01.188 20:43:44 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:01.188 20:43:44 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:17:01.188 20:43:44 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:17:01.188 20:43:44 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:17:01.188 20:43:44 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:17:01.188 20:43:44 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:17:01.188 20:43:44 -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:17:01.188 20:43:44 -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:17:01.188 20:43:44 -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:17:01.188 20:43:44 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.188 20:43:44 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:17:01.188 20:43:44 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=57116 00:17:01.188 20:43:44 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:01.188 20:43:44 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 57116 /var/tmp/spdk.sock 00:17:01.188 20:43:44 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:17:01.188 20:43:44 -- common/autotest_common.sh@819 -- # '[' -z 57116 ']' 00:17:01.188 20:43:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.188 20:43:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:01.188 20:43:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.188 20:43:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:01.188 20:43:44 -- common/autotest_common.sh@10 -- # set +x 00:17:01.448 [2024-04-15 20:43:44.778536] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
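waitforlisten then blocks until the freshly launched interrupt_tgt is actually serving RPCs on /var/tmp/spdk.sock, giving up after the max_retries=100 traced above. A simplified sketch; checking that the UNIX socket exists stands in for the real helper's rpc.py probe:

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while (( max_retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
        [[ -S $rpc_addr ]] && return 0           # socket is up (simplification, see above)
        sleep 0.1
    done
    return 1    # timed out waiting for the target
}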
00:17:01.448 [2024-04-15 20:43:44.778864] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57116 ] 00:17:01.448 [2024-04-15 20:43:44.942251] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:01.707 [2024-04-15 20:43:45.155375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.707 [2024-04-15 20:43:45.155535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:01.707 [2024-04-15 20:43:45.155537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.274 [2024-04-15 20:43:45.494897] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:17:02.274 20:43:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:02.274 20:43:45 -- common/autotest_common.sh@852 -- # return 0 00:17:02.274 20:43:45 -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:17:02.274 20:43:45 -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:17:02.274 20:43:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:02.274 20:43:45 -- common/autotest_common.sh@10 -- # set +x 00:17:02.274 20:43:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:02.274 20:43:45 -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:17:02.274 "name": "app_thread", 00:17:02.274 "id": 1, 00:17:02.274 "active_pollers": [], 00:17:02.274 "timed_pollers": [ 00:17:02.274 { 00:17:02.274 "name": "rpc_subsystem_poll", 00:17:02.274 "id": 1, 00:17:02.274 "state": "waiting", 00:17:02.274 "run_count": 0, 00:17:02.274 "busy_count": 0, 00:17:02.274 "period_ticks": 9960000 00:17:02.274 } 00:17:02.274 ], 00:17:02.274 "paused_pollers": [] 00:17:02.274 }' 00:17:02.274 20:43:45 -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:17:02.274 20:43:45 -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:17:02.274 20:43:45 -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:17:02.274 20:43:45 -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:17:02.274 20:43:45 -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll 00:17:02.274 20:43:45 -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:17:02.274 20:43:45 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:17:02.274 20:43:45 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:17:02.274 20:43:45 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:17:02.274 5000+0 records in 00:17:02.274 5000+0 records out 00:17:02.274 10240000 bytes (10 MB) copied, 0.017589 s, 582 MB/s 00:17:02.274 20:43:45 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:17:02.533 AIO0 00:17:02.533 20:43:45 -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:02.791 20:43:46 -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:17:02.791 20:43:46 -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:17:02.791 20:43:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:02.791 
20:43:46 -- common/autotest_common.sh@10 -- # set +x 00:17:02.791 20:43:46 -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:17:02.791 20:43:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:02.791 20:43:46 -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:17:02.791 "name": "app_thread", 00:17:02.791 "id": 1, 00:17:02.791 "active_pollers": [], 00:17:02.791 "timed_pollers": [ 00:17:02.791 { 00:17:02.791 "name": "rpc_subsystem_poll", 00:17:02.791 "id": 1, 00:17:02.791 "state": "waiting", 00:17:02.791 "run_count": 0, 00:17:02.791 "busy_count": 0, 00:17:02.791 "period_ticks": 9960000 00:17:02.791 } 00:17:02.791 ], 00:17:02.791 "paused_pollers": [] 00:17:02.791 }' 00:17:02.791 20:43:46 -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:17:02.791 20:43:46 -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:17:02.792 20:43:46 -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:17:02.792 20:43:46 -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:17:03.050 20:43:46 -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll 00:17:03.050 20:43:46 -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l ]] 00:17:03.050 20:43:46 -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:03.050 20:43:46 -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 57116 00:17:03.050 20:43:46 -- common/autotest_common.sh@926 -- # '[' -z 57116 ']' 00:17:03.050 20:43:46 -- common/autotest_common.sh@930 -- # kill -0 57116 00:17:03.050 20:43:46 -- common/autotest_common.sh@931 -- # uname 00:17:03.050 20:43:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:03.050 20:43:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 57116 00:17:03.050 killing process with pid 57116 00:17:03.050 20:43:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:03.050 20:43:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:03.050 20:43:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 57116' 00:17:03.050 20:43:46 -- common/autotest_common.sh@945 -- # kill 57116 00:17:03.050 20:43:46 -- common/autotest_common.sh@950 -- # wait 57116 00:17:04.428 20:43:47 -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:17:04.428 20:43:47 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:17:04.428 00:17:04.428 real 0m3.232s 00:17:04.428 user 0m2.814s 00:17:04.428 sys 0m0.497s 00:17:04.428 20:43:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:04.428 20:43:47 -- common/autotest_common.sh@10 -- # set +x 00:17:04.428 ************************************ 00:17:04.428 END TEST reap_unregistered_poller 00:17:04.428 ************************************ 00:17:04.428 20:43:47 -- spdk/autotest.sh@204 -- # uname -s 00:17:04.428 20:43:47 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:17:04.428 20:43:47 -- spdk/autotest.sh@205 -- # [[ 1 -eq 1 ]] 00:17:04.428 20:43:47 -- spdk/autotest.sh@211 -- # [[ 0 -eq 0 ]] 00:17:04.428 20:43:47 -- spdk/autotest.sh@212 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:17:04.428 20:43:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:04.428 20:43:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:04.428 20:43:47 -- common/autotest_common.sh@10 -- # set +x 
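Stepping back, the reap_unregistered_poller test that just ended is a before/after diff of the app thread's poller list: snapshot it over JSON-RPC, attach and examine an AIO bdev (which registers extra pollers), then confirm only the original rpc_subsystem_poll remains once the bdev's pollers are unregistered. Distilled into a sketch built from the rpc.py call and jq filters visible in the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

snapshot_pollers() {
    "$rpc" thread_get_pollers \
        | jq -r '.threads[0] | [.active_pollers[].name, .timed_pollers[].name] | join(" ")'
}

before=$(snapshot_pollers)    # just "rpc_subsystem_poll" in this run
# ... bdev_aio_create AIO0 / bdev_wait_for_examine happen in between ...
after=$(snapshot_pollers)
[[ $after == "$before" ]]     # any poller the bdev added must have been reaped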
00:17:04.428 ************************************ 00:17:04.428 START TEST spdk_dd 00:17:04.428 ************************************ 00:17:04.428 20:43:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:17:04.428 * Looking for test storage... 00:17:04.428 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:17:04.428 20:43:47 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:04.428 20:43:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:04.428 20:43:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:04.428 20:43:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:04.428 20:43:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:17:04.428 20:43:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:17:04.428 20:43:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:17:04.428 20:43:47 -- paths/export.sh@5 -- # export PATH 00:17:04.428 20:43:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:17:04.428 20:43:47 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:04.687 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:17:04.687 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:04.687 20:43:48 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:17:04.687 20:43:48 -- dd/dd.sh@11 -- # nvme_in_userspace 00:17:04.687 20:43:48 -- scripts/common.sh@311 -- # local bdf bdfs 00:17:04.687 20:43:48 -- scripts/common.sh@312 -- # local nvmes 00:17:04.687 20:43:48 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:17:04.687 20:43:48 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:17:04.687 20:43:48 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:17:04.687 20:43:48 -- scripts/common.sh@297 -- # local bdf= 00:17:04.687 20:43:48 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:17:04.687 20:43:48 -- scripts/common.sh@232 -- # local class 00:17:04.687 20:43:48 -- scripts/common.sh@233 -- # local subclass 00:17:04.687 20:43:48 -- scripts/common.sh@234 -- # local progif 00:17:04.688 20:43:48 -- scripts/common.sh@235 -- # printf %02x 1 00:17:04.688 20:43:48 -- 
scripts/common.sh@235 -- # class=01 00:17:04.688 20:43:48 -- scripts/common.sh@236 -- # printf %02x 8 00:17:04.688 20:43:48 -- scripts/common.sh@236 -- # subclass=08 00:17:04.688 20:43:48 -- scripts/common.sh@237 -- # printf %02x 2 00:17:04.688 20:43:48 -- scripts/common.sh@237 -- # progif=02 00:17:04.688 20:43:48 -- scripts/common.sh@239 -- # hash lspci 00:17:04.688 20:43:48 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:17:04.688 20:43:48 -- scripts/common.sh@242 -- # grep -i -- -p02 00:17:04.688 20:43:48 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:17:04.688 20:43:48 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:17:04.688 20:43:48 -- scripts/common.sh@244 -- # tr -d '"' 00:17:04.688 20:43:48 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:17:04.688 20:43:48 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:17:04.688 20:43:48 -- scripts/common.sh@15 -- # local i 00:17:04.688 20:43:48 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:17:04.688 20:43:48 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:17:04.688 20:43:48 -- scripts/common.sh@24 -- # return 0 00:17:04.688 20:43:48 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:17:04.688 20:43:48 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:17:04.688 20:43:48 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:17:04.688 20:43:48 -- scripts/common.sh@322 -- # uname -s 00:17:04.688 20:43:48 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:17:04.688 20:43:48 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:17:04.688 20:43:48 -- scripts/common.sh@327 -- # (( 1 )) 00:17:04.688 20:43:48 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 00:17:04.688 20:43:48 -- dd/dd.sh@13 -- # check_liburing 00:17:04.688 20:43:48 -- dd/common.sh@139 -- # local lib so 00:17:04.688 20:43:48 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:17:04.688 20:43:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:04.688 20:43:48 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:17:04.688 20:43:48 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:04.688 20:43:48 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:17:04.688 20:43:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:04.688 20:43:48 -- dd/common.sh@143 -- # [[ libasan.so.6 == liburing.so.* ]] 00:17:04.688 20:43:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:04.688 20:43:48 -- dd/common.sh@143 -- # [[ libssl.so.1.1 == liburing.so.* ]] 00:17:04.688 20:43:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:04.688 20:43:48 -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:17:04.688 20:43:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:04.688 20:43:48 -- dd/common.sh@143 -- # [[ libdl.so.2 == liburing.so.* ]] 00:17:04.688 20:43:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:04.688 20:43:48 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:17:04.688 20:43:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:04.688 20:43:48 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:17:04.688 20:43:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:04.688 20:43:48 -- dd/common.sh@143 -- # [[ librt.so.1 == liburing.so.* ]] 00:17:04.688 20:43:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:04.688 20:43:48 -- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]] 00:17:04.688 20:43:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:04.688 20:43:48 -- 
dd/common.sh@143 -- # [[ libcrypto.so.1.1 == liburing.so.* ]] 00:17:04.688 20:43:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:04.688 20:43:48 -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]] 00:17:04.688 20:43:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:04.688 20:43:48 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:17:04.688 20:43:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:04.688 20:43:48 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:17:04.688 20:43:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:04.688 20:43:48 -- dd/common.sh@143 -- # [[ libdaos.so.2 == liburing.so.* ]] 00:17:04.688 20:43:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:04.688 20:43:48 -- dd/common.sh@143 -- # [[ libdaos_common.so == liburing.so.* ]] 00:17:04.688 20:43:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:04.688 20:43:48 -- dd/common.sh@143 -- # [[ libdfs.so == liburing.so.* ]] 00:17:04.688 20:43:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:04.688 20:43:48 -- dd/common.sh@143 -- # [[ libgurt.so.4 == liburing.so.* ]] 00:17:04.688 20:43:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:04.688 20:43:48 -- dd/common.sh@143 -- # [[ libpthread.so.0 == liburing.so.* ]] 00:17:04.688 20:43:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:04.688 20:43:48 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:17:04.688 20:43:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:04.688 20:43:48 -- dd/common.sh@143 -- # [[ libstdc++.so.6 == liburing.so.* ]] 00:17:04.688 20:43:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:04.688 20:43:48 -- dd/common.sh@143 -- # [[ libgcc_s.so.1 == liburing.so.* ]] 00:17:04.688 20:43:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:04.688 20:43:48 -- dd/common.sh@143 -- # [[ libz.so.1 == liburing.so.* ]] 00:17:04.688 20:43:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:04.688 20:43:48 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:17:04.688 20:43:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:04.688 20:43:48 -- dd/common.sh@143 -- # [[ libnl-route-3.so.200 == liburing.so.* ]] 00:17:04.688 20:43:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:04.688 20:43:48 -- dd/common.sh@143 -- # [[ libnl-3.so.200 == liburing.so.* ]] 00:17:04.688 20:43:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:04.688 20:43:48 -- dd/common.sh@143 -- # [[ libisal.so.2 == liburing.so.* ]] 00:17:04.688 20:43:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:04.688 20:43:48 -- dd/common.sh@143 -- # [[ libisal_crypto.so.2 == liburing.so.* ]] 00:17:04.688 20:43:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:04.688 20:43:48 -- dd/common.sh@143 -- # [[ libcart.so.4 == liburing.so.* ]] 00:17:04.688 20:43:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:04.688 20:43:48 -- dd/common.sh@143 -- # [[ liblz4.so.1 == liburing.so.* ]] 00:17:04.688 20:43:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:04.688 20:43:48 -- dd/common.sh@143 -- # [[ libprotobuf-c.so.1 == liburing.so.* ]] 00:17:04.688 20:43:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:04.688 20:43:48 -- dd/common.sh@143 -- # [[ libyaml-0.so.2 == liburing.so.* ]] 00:17:04.688 20:43:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:04.688 20:43:48 -- dd/common.sh@143 -- # [[ libmercury_hl.so.2 == liburing.so.* ]] 00:17:04.688 20:43:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:04.688 20:43:48 -- dd/common.sh@143 -- # [[ libmercury.so.2 == liburing.so.* ]] 00:17:04.688 
20:43:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:04.688 20:43:48 -- dd/common.sh@143 -- # [[ libmercury_util.so.2 == liburing.so.* ]] 00:17:04.688 20:43:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:04.688 20:43:48 -- dd/common.sh@143 -- # [[ libna.so.2 == liburing.so.* ]] 00:17:04.688 20:43:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:04.688 20:43:48 -- dd/common.sh@143 -- # [[ libfabric.so.1 == liburing.so.* ]] 00:17:04.688 20:43:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:04.688 20:43:48 -- dd/common.sh@143 -- # [[ libpsm2.so.2 == liburing.so.* ]] 00:17:04.688 20:43:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:17:04.688 20:43:48 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:17:04.688 20:43:48 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:17:04.688 20:43:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:04.688 20:43:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:04.688 20:43:48 -- common/autotest_common.sh@10 -- # set +x 00:17:04.688 ************************************ 00:17:04.688 START TEST spdk_dd_basic_rw 00:17:04.688 ************************************ 00:17:04.688 20:43:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:17:04.947 * Looking for test storage... 00:17:04.947 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:17:04.947 20:43:48 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:04.947 20:43:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:04.947 20:43:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:04.947 20:43:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:04.947 20:43:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:17:04.947 20:43:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:17:04.947 20:43:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:17:04.947 20:43:48 -- paths/export.sh@5 -- # export PATH 00:17:04.947 20:43:48 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:17:04.947 20:43:48 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:17:04.947 20:43:48 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:17:04.947 20:43:48 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:17:04.947 20:43:48 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:17:04.947 20:43:48 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:17:04.947 20:43:48 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(["name"]=$nvme0 ["traddr"]=$nvme0_pci ["trtype"]=pcie) 00:17:04.947 20:43:48 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:17:04.947 20:43:48 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:17:04.947 20:43:48 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:04.947 20:43:48 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:17:04.947 20:43:48 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:17:04.947 20:43:48 -- dd/common.sh@126 -- # mapfile -t id 00:17:04.947 20:43:48 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:17:05.208 20:43:48 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not 
Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): 
Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 80 Data Units Written: 204 Host Read Commands: 1608 Host Write Commands: 308 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:17:05.208 20:43:48 -- dd/common.sh@130 -- # lbaf=04 00:17:05.208 20:43:48 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 
[second copy of the controller identify output, repeated verbatim from the dump above for the Data Size match; duplicate elided] ... Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA
Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:17:05.208 20:43:48 -- dd/common.sh@132 -- # lbaf=4096 00:17:05.208 20:43:48 -- dd/common.sh@134 -- # echo 4096 00:17:05.208 20:43:48 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:17:05.208 20:43:48 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:17:05.208 20:43:48 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:17:05.208 20:43:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:05.208 20:43:48 -- common/autotest_common.sh@10 -- # set +x 00:17:05.208 20:43:48 -- dd/basic_rw.sh@96 -- # : 00:17:05.208 20:43:48 -- dd/basic_rw.sh@96 -- # gen_conf 00:17:05.208 20:43:48 -- dd/common.sh@31 -- # xtrace_disable 00:17:05.208 20:43:48 -- common/autotest_common.sh@10 -- # set +x 00:17:05.208 ************************************ 00:17:05.208 START TEST dd_bs_lt_native_bs 00:17:05.208 ************************************ 00:17:05.208 20:43:48 -- common/autotest_common.sh@1104 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:17:05.208 20:43:48 -- common/autotest_common.sh@640 -- # local es=0 00:17:05.208 20:43:48 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:17:05.208 20:43:48 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:05.208 20:43:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:05.208 20:43:48 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:05.208 20:43:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:05.208 20:43:48 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:05.208 20:43:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:05.208 20:43:48 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:05.209 20:43:48 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:17:05.209 20:43:48 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:17:05.209 { 00:17:05.209 "subsystems": [ 00:17:05.209 { 00:17:05.209 "subsystem": "bdev", 00:17:05.209 "config": [ 00:17:05.209 { 00:17:05.209 "params": { 00:17:05.209 "trtype": "pcie", 00:17:05.209 "name": "Nvme0", 00:17:05.209 "traddr": "0000:00:06.0" 00:17:05.209 }, 00:17:05.209 "method": "bdev_nvme_attach_controller" 00:17:05.209 }, 00:17:05.209 { 00:17:05.209 "method": "bdev_wait_for_examine" 00:17:05.209 } 00:17:05.209 ] 00:17:05.209 } 00:17:05.209 ] 00:17:05.209 } 00:17:05.468 [2024-04-15 20:43:48.802501] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
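get_native_nvme_bs above turns the spdk_nvme_identify dump into a block size in two regex steps: find the index of the current LBA format, then look up that format's data size. A sketch of the same extraction, assuming the identify output was captured into the id array with mapfile as in the log:

# Step 1: which LBA format is current? (matches "Current LBA Format: LBA Format #04")
re_current='Current LBA Format: *LBA Format #([0-9]+)'
[[ "${id[*]}" =~ $re_current ]] && lbaf=${BASH_REMATCH[1]}
# Step 2: that format's data size is the native block size (4096 for format #04 here).
re_size="LBA Format #${lbaf}: Data Size: *([0-9]+)"
[[ "${id[*]}" =~ $re_size ]] && native_bs=${BASH_REMATCH[1]}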
00:17:05.468 [2024-04-15 20:43:48.802852] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57391 ] 00:17:05.468 [2024-04-15 20:43:48.958360] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.727 [2024-04-15 20:43:49.154037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.295 [2024-04-15 20:43:49.574195] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:17:06.295 [2024-04-15 20:43:49.574296] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:07.228 [2024-04-15 20:43:50.442391] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:17:07.488 ************************************ 00:17:07.488 END TEST dd_bs_lt_native_bs 00:17:07.488 ************************************ 00:17:07.488 20:43:50 -- common/autotest_common.sh@643 -- # es=234 00:17:07.488 20:43:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:07.488 20:43:50 -- common/autotest_common.sh@652 -- # es=106 00:17:07.488 20:43:50 -- common/autotest_common.sh@653 -- # case "$es" in 00:17:07.488 20:43:50 -- common/autotest_common.sh@660 -- # es=1 00:17:07.488 20:43:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:07.488 00:17:07.488 real 0m2.178s 00:17:07.488 user 0m1.804s 00:17:07.488 sys 0m0.244s 00:17:07.488 20:43:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:07.488 20:43:50 -- common/autotest_common.sh@10 -- # set +x 00:17:07.488 20:43:50 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:17:07.488 20:43:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:07.488 20:43:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:07.488 20:43:50 -- common/autotest_common.sh@10 -- # set +x 00:17:07.488 ************************************ 00:17:07.488 START TEST dd_rw 00:17:07.488 ************************************ 00:17:07.488 20:43:50 -- common/autotest_common.sh@1104 -- # basic_rw 4096 00:17:07.488 20:43:50 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:17:07.488 20:43:50 -- dd/basic_rw.sh@12 -- # local count size 00:17:07.488 20:43:50 -- dd/basic_rw.sh@13 -- # local qds bss 00:17:07.488 20:43:50 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:17:07.488 20:43:50 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:17:07.488 20:43:50 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:17:07.488 20:43:50 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:17:07.488 20:43:50 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:17:07.488 20:43:50 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:17:07.488 20:43:50 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:17:07.488 20:43:50 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:17:07.488 20:43:50 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:17:07.488 20:43:50 -- dd/basic_rw.sh@23 -- # count=15 00:17:07.488 20:43:50 -- dd/basic_rw.sh@24 -- # count=15 00:17:07.488 20:43:50 -- dd/basic_rw.sh@25 -- # size=61440 00:17:07.488 20:43:50 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:17:07.488 20:43:50 -- dd/common.sh@98 -- # xtrace_disable 00:17:07.488 20:43:50 -- common/autotest_common.sh@10 -- # set +x 00:17:08.056 20:43:51 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
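dd_rw's loop structure is visible in the traces above: the block sizes are the native block size shifted left by 0, 1 and 2, and each size is driven at queue depths 1 and 64. A sketch of that matrix, with values from this run:

native_bs=4096                 # from get_native_nvme_bs above
qds=(1 64)
bss=()
for s in 0 1 2; do bss+=($((native_bs << s))); done   # 4096 8192 16384
for bs in "${bss[@]}"; do
  for qd in "${qds[@]}"; do
    echo "write/read/verify pass at bs=$bs qd=$qd"
  done
done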
00:17:08.056 20:43:51 -- dd/basic_rw.sh@30 -- # gen_conf 00:17:08.056 20:43:51 -- dd/common.sh@31 -- # xtrace_disable 00:17:08.056 20:43:51 -- common/autotest_common.sh@10 -- # set +x 00:17:08.056 { 00:17:08.056 "subsystems": [ 00:17:08.056 { 00:17:08.056 "subsystem": "bdev", 00:17:08.056 "config": [ 00:17:08.056 { 00:17:08.056 "params": { 00:17:08.056 "trtype": "pcie", 00:17:08.056 "name": "Nvme0", 00:17:08.056 "traddr": "0000:00:06.0" 00:17:08.056 }, 00:17:08.056 "method": "bdev_nvme_attach_controller" 00:17:08.056 }, 00:17:08.056 { 00:17:08.056 "method": "bdev_wait_for_examine" 00:17:08.056 } 00:17:08.056 ] 00:17:08.056 } 00:17:08.056 ] 00:17:08.056 } 00:17:08.315 [2024-04-15 20:43:51.601037] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:17:08.315 [2024-04-15 20:43:51.601177] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57455 ] 00:17:08.315 [2024-04-15 20:43:51.745181] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.574 [2024-04-15 20:43:51.928167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.080  Copying: 60/60 [kB] (average 19 MBps) 00:17:10.080 00:17:10.339 20:43:53 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:17:10.339 20:43:53 -- dd/basic_rw.sh@37 -- # gen_conf 00:17:10.339 20:43:53 -- dd/common.sh@31 -- # xtrace_disable 00:17:10.339 20:43:53 -- common/autotest_common.sh@10 -- # set +x 00:17:10.339 { 00:17:10.339 "subsystems": [ 00:17:10.339 { 00:17:10.339 "subsystem": "bdev", 00:17:10.339 "config": [ 00:17:10.339 { 00:17:10.339 "params": { 00:17:10.339 "trtype": "pcie", 00:17:10.339 "name": "Nvme0", 00:17:10.339 "traddr": "0000:00:06.0" 00:17:10.339 }, 00:17:10.339 "method": "bdev_nvme_attach_controller" 00:17:10.339 }, 00:17:10.339 { 00:17:10.339 "method": "bdev_wait_for_examine" 00:17:10.339 } 00:17:10.339 ] 00:17:10.339 } 00:17:10.339 ] 00:17:10.339 } 00:17:10.339 [2024-04-15 20:43:53.742569] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
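Every spdk_dd invocation here is handed the same bdev subsystem config on /dev/fd/62; gen_conf just emits the JSON shown in the traces. A sketch of driving one write/read pair by hand, assuming the same controller address (0000:00:06.0) and repo layout as this log:

conf='{"subsystems":[{"subsystem":"bdev","config":[
  {"params":{"trtype":"pcie","name":"Nvme0","traddr":"0000:00:06.0"},
   "method":"bdev_nvme_attach_controller"},
  {"method":"bdev_wait_for_examine"}]}]}'
# Write dump0 into the bdev, then read the same range back out into dump1.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
  --ob=Nvme0n1 --bs=4096 --qd=1 --json <(echo "$conf")
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 \
  --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json <(echo "$conf")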
00:17:10.339 [2024-04-15 20:43:53.742912] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57489 ] 00:17:10.598 [2024-04-15 20:43:53.902102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.856 [2024-04-15 20:43:54.126461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.523  Copying: 60/60 [kB] (average 29 MBps) 00:17:12.523 00:17:12.523 20:43:55 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:12.523 20:43:55 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:17:12.523 20:43:55 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:17:12.523 20:43:55 -- dd/common.sh@11 -- # local nvme_ref= 00:17:12.523 20:43:55 -- dd/common.sh@12 -- # local size=61440 00:17:12.523 20:43:55 -- dd/common.sh@14 -- # local bs=1048576 00:17:12.523 20:43:55 -- dd/common.sh@15 -- # local count=1 00:17:12.523 20:43:55 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:17:12.523 20:43:55 -- dd/common.sh@18 -- # gen_conf 00:17:12.523 20:43:55 -- dd/common.sh@31 -- # xtrace_disable 00:17:12.523 20:43:55 -- common/autotest_common.sh@10 -- # set +x 00:17:12.523 { 00:17:12.523 "subsystems": [ 00:17:12.523 { 00:17:12.523 "subsystem": "bdev", 00:17:12.523 "config": [ 00:17:12.523 { 00:17:12.523 "params": { 00:17:12.523 "trtype": "pcie", 00:17:12.523 "name": "Nvme0", 00:17:12.523 "traddr": "0000:00:06.0" 00:17:12.523 }, 00:17:12.523 "method": "bdev_nvme_attach_controller" 00:17:12.523 }, 00:17:12.523 { 00:17:12.523 "method": "bdev_wait_for_examine" 00:17:12.523 } 00:17:12.523 ] 00:17:12.523 } 00:17:12.523 ] 00:17:12.523 } 00:17:12.523 [2024-04-15 20:43:55.963757] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
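Between iterations the test closes the loop as above: diff the input and output dumps, then clear_nvme zero-fills the start of the bdev so the next pattern is written over clean data. A sketch of that step, reusing $conf from the previous sketch:

diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 || exit 1
# One 1 MiB block of zeroes more than covers the 61440 bytes this pass wrote.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 \
  --ob=Nvme0n1 --count=1 --json <(echo "$conf")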
00:17:12.523 [2024-04-15 20:43:55.963910] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57522 ] 00:17:12.782 [2024-04-15 20:43:56.111663] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.040 [2024-04-15 20:43:56.306450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.676  Copying: 1024/1024 [kB] (average 333 MBps) 00:17:14.677 00:17:14.677 20:43:57 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:17:14.677 20:43:57 -- dd/basic_rw.sh@23 -- # count=15 00:17:14.677 20:43:57 -- dd/basic_rw.sh@24 -- # count=15 00:17:14.677 20:43:57 -- dd/basic_rw.sh@25 -- # size=61440 00:17:14.677 20:43:57 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:17:14.677 20:43:57 -- dd/common.sh@98 -- # xtrace_disable 00:17:14.677 20:43:57 -- common/autotest_common.sh@10 -- # set +x 00:17:15.243 20:43:58 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:17:15.243 20:43:58 -- dd/basic_rw.sh@30 -- # gen_conf 00:17:15.243 20:43:58 -- dd/common.sh@31 -- # xtrace_disable 00:17:15.243 20:43:58 -- common/autotest_common.sh@10 -- # set +x 00:17:15.243 { 00:17:15.243 "subsystems": [ 00:17:15.243 { 00:17:15.243 "subsystem": "bdev", 00:17:15.243 "config": [ 00:17:15.243 { 00:17:15.243 "params": { 00:17:15.243 "trtype": "pcie", 00:17:15.243 "name": "Nvme0", 00:17:15.243 "traddr": "0000:00:06.0" 00:17:15.243 }, 00:17:15.243 "method": "bdev_nvme_attach_controller" 00:17:15.243 }, 00:17:15.243 { 00:17:15.243 "method": "bdev_wait_for_examine" 00:17:15.243 } 00:17:15.243 ] 00:17:15.243 } 00:17:15.243 ] 00:17:15.243 } 00:17:15.243 [2024-04-15 20:43:58.642001] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:17:15.243 [2024-04-15 20:43:58.642148] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57565 ] 00:17:15.503 [2024-04-15 20:43:58.796474] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.503 [2024-04-15 20:43:58.996168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.447  Copying: 60/60 [kB] (average 58 MBps) 00:17:17.447 00:17:17.447 20:44:00 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:17:17.447 20:44:00 -- dd/basic_rw.sh@37 -- # gen_conf 00:17:17.447 20:44:00 -- dd/common.sh@31 -- # xtrace_disable 00:17:17.447 20:44:00 -- common/autotest_common.sh@10 -- # set +x 00:17:17.447 { 00:17:17.447 "subsystems": [ 00:17:17.447 { 00:17:17.447 "subsystem": "bdev", 00:17:17.447 "config": [ 00:17:17.447 { 00:17:17.447 "params": { 00:17:17.447 "trtype": "pcie", 00:17:17.447 "name": "Nvme0", 00:17:17.447 "traddr": "0000:00:06.0" 00:17:17.447 }, 00:17:17.447 "method": "bdev_nvme_attach_controller" 00:17:17.447 }, 00:17:17.447 { 00:17:17.447 "method": "bdev_wait_for_examine" 00:17:17.447 } 00:17:17.447 ] 00:17:17.448 } 00:17:17.448 ] 00:17:17.448 } 00:17:17.448 [2024-04-15 20:44:00.825014] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
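This pass repeats the bs=4096 copy at --qd=64; --qd is how many I/Os spdk_dd keeps outstanding, and the jump in the Copying lines from an average of 19 MBps (qd=1 above) to 58 MBps is consistent with the deeper queue, though a log like this can't rule out other effects. A quick way to compare the two by hand, assuming $conf as before:

for qd in 1 64; do
  time /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
    --ob=Nvme0n1 --bs=4096 --qd=$qd --json <(echo "$conf")
done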
00:17:17.448 [2024-04-15 20:44:00.825173] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57597 ] 00:17:17.706 [2024-04-15 20:44:00.998856] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.706 [2024-04-15 20:44:01.193221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.675  Copying: 60/60 [kB] (average 58 MBps) 00:17:19.675 00:17:19.675 20:44:02 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:19.675 20:44:02 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:17:19.675 20:44:02 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:17:19.675 20:44:02 -- dd/common.sh@11 -- # local nvme_ref= 00:17:19.675 20:44:02 -- dd/common.sh@12 -- # local size=61440 00:17:19.675 20:44:02 -- dd/common.sh@14 -- # local bs=1048576 00:17:19.675 20:44:02 -- dd/common.sh@15 -- # local count=1 00:17:19.675 20:44:02 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:17:19.675 20:44:02 -- dd/common.sh@18 -- # gen_conf 00:17:19.675 20:44:02 -- dd/common.sh@31 -- # xtrace_disable 00:17:19.675 20:44:02 -- common/autotest_common.sh@10 -- # set +x 00:17:19.675 { 00:17:19.675 "subsystems": [ 00:17:19.675 { 00:17:19.675 "subsystem": "bdev", 00:17:19.675 "config": [ 00:17:19.675 { 00:17:19.675 "params": { 00:17:19.675 "trtype": "pcie", 00:17:19.675 "name": "Nvme0", 00:17:19.675 "traddr": "0000:00:06.0" 00:17:19.675 }, 00:17:19.675 "method": "bdev_nvme_attach_controller" 00:17:19.675 }, 00:17:19.675 { 00:17:19.675 "method": "bdev_wait_for_examine" 00:17:19.675 } 00:17:19.675 ] 00:17:19.675 } 00:17:19.675 ] 00:17:19.675 } 00:17:19.675 [2024-04-15 20:44:02.986475] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:17:19.675 [2024-04-15 20:44:02.986635] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57630 ] 00:17:19.675 [2024-04-15 20:44:03.151338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.932 [2024-04-15 20:44:03.352428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.875  Copying: 1024/1024 [kB] (average 1000 MBps) 00:17:21.875 00:17:21.875 20:44:05 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:17:21.875 20:44:05 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:17:21.875 20:44:05 -- dd/basic_rw.sh@23 -- # count=7 00:17:21.875 20:44:05 -- dd/basic_rw.sh@24 -- # count=7 00:17:21.875 20:44:05 -- dd/basic_rw.sh@25 -- # size=57344 00:17:21.875 20:44:05 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:17:21.875 20:44:05 -- dd/common.sh@98 -- # xtrace_disable 00:17:21.875 20:44:05 -- common/autotest_common.sh@10 -- # set +x 00:17:22.442 20:44:05 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:17:22.442 20:44:05 -- dd/basic_rw.sh@30 -- # gen_conf 00:17:22.442 20:44:05 -- dd/common.sh@31 -- # xtrace_disable 00:17:22.442 20:44:05 -- common/autotest_common.sh@10 -- # set +x 00:17:22.442 { 00:17:22.442 "subsystems": [ 00:17:22.442 { 00:17:22.442 "subsystem": "bdev", 00:17:22.442 "config": [ 00:17:22.442 { 00:17:22.442 "params": { 00:17:22.442 "trtype": "pcie", 00:17:22.442 "name": "Nvme0", 00:17:22.442 "traddr": "0000:00:06.0" 00:17:22.442 }, 00:17:22.442 "method": "bdev_nvme_attach_controller" 00:17:22.442 }, 00:17:22.442 { 00:17:22.442 "method": "bdev_wait_for_examine" 00:17:22.442 } 00:17:22.442 ] 00:17:22.442 } 00:17:22.442 ] 00:17:22.442 } 00:17:22.442 [2024-04-15 20:44:05.797019] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
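With bs=8192 the count drops to 7 (size 57344), just as bs=4096 used count=15 (size 61440) and bs=16384 will use count=3 (size 49152). This pattern is an observation from the sizes in this log rather than a rule quoted from basic_rw.sh: each count is one block short of 64 KiB, so every transfer stays under the same ceiling:

for bs in 4096 8192 16384; do
  count=$((65536 / bs - 1))
  echo "bs=$bs count=$count size=$((count * bs))"   # 61440, 57344, 49152
done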
00:17:22.442 [2024-04-15 20:44:05.797186] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57674 ] 00:17:22.700 [2024-04-15 20:44:05.952211] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.959 [2024-04-15 20:44:06.201682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.593  Copying: 56/56 [kB] (average 27 MBps) 00:17:24.593 00:17:24.593 20:44:07 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:17:24.593 20:44:07 -- dd/basic_rw.sh@37 -- # gen_conf 00:17:24.593 20:44:07 -- dd/common.sh@31 -- # xtrace_disable 00:17:24.593 20:44:07 -- common/autotest_common.sh@10 -- # set +x 00:17:24.593 { 00:17:24.593 "subsystems": [ 00:17:24.593 { 00:17:24.593 "subsystem": "bdev", 00:17:24.593 "config": [ 00:17:24.593 { 00:17:24.593 "params": { 00:17:24.593 "trtype": "pcie", 00:17:24.593 "name": "Nvme0", 00:17:24.593 "traddr": "0000:00:06.0" 00:17:24.593 }, 00:17:24.593 "method": "bdev_nvme_attach_controller" 00:17:24.593 }, 00:17:24.593 { 00:17:24.593 "method": "bdev_wait_for_examine" 00:17:24.593 } 00:17:24.593 ] 00:17:24.593 } 00:17:24.593 ] 00:17:24.593 } 00:17:24.593 [2024-04-15 20:44:08.081667] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:17:24.593 [2024-04-15 20:44:08.081820] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57705 ] 00:17:24.852 [2024-04-15 20:44:08.247678] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.111 [2024-04-15 20:44:08.444955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.745  Copying: 56/56 [kB] (average 27 MBps) 00:17:26.745 00:17:26.745 20:44:10 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:26.745 20:44:10 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:17:26.745 20:44:10 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:17:26.745 20:44:10 -- dd/common.sh@11 -- # local nvme_ref= 00:17:26.745 20:44:10 -- dd/common.sh@12 -- # local size=57344 00:17:26.745 20:44:10 -- dd/common.sh@14 -- # local bs=1048576 00:17:26.745 20:44:10 -- dd/common.sh@15 -- # local count=1 00:17:26.745 20:44:10 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:17:26.745 20:44:10 -- dd/common.sh@18 -- # gen_conf 00:17:26.745 20:44:10 -- dd/common.sh@31 -- # xtrace_disable 00:17:26.745 20:44:10 -- common/autotest_common.sh@10 -- # set +x 00:17:26.745 { 00:17:26.745 "subsystems": [ 00:17:26.745 { 00:17:26.745 "subsystem": "bdev", 00:17:26.745 "config": [ 00:17:26.745 { 00:17:26.745 "params": { 00:17:26.745 "trtype": "pcie", 00:17:26.745 "name": "Nvme0", 00:17:26.745 "traddr": "0000:00:06.0" 00:17:26.745 }, 00:17:26.745 "method": "bdev_nvme_attach_controller" 00:17:26.745 }, 00:17:26.745 { 00:17:26.745 "method": "bdev_wait_for_examine" 00:17:26.745 } 00:17:26.745 ] 00:17:26.745 } 00:17:26.745 ] 00:17:26.745 } 00:17:27.004 [2024-04-15 20:44:10.251133] Starting SPDK v24.01.1-pre git sha1 
3b33f4333 / DPDK 23.11.0 initialization... 00:17:27.004 [2024-04-15 20:44:10.251279] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57740 ] 00:17:27.004 [2024-04-15 20:44:10.405089] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.263 [2024-04-15 20:44:10.599223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.922  Copying: 1024/1024 [kB] (average 1000 MBps) 00:17:28.922 00:17:28.922 20:44:12 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:17:28.922 20:44:12 -- dd/basic_rw.sh@23 -- # count=7 00:17:28.922 20:44:12 -- dd/basic_rw.sh@24 -- # count=7 00:17:28.922 20:44:12 -- dd/basic_rw.sh@25 -- # size=57344 00:17:28.922 20:44:12 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:17:28.922 20:44:12 -- dd/common.sh@98 -- # xtrace_disable 00:17:28.922 20:44:12 -- common/autotest_common.sh@10 -- # set +x 00:17:29.487 20:44:12 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:17:29.487 20:44:12 -- dd/basic_rw.sh@30 -- # gen_conf 00:17:29.487 20:44:12 -- dd/common.sh@31 -- # xtrace_disable 00:17:29.487 20:44:12 -- common/autotest_common.sh@10 -- # set +x 00:17:29.487 { 00:17:29.487 "subsystems": [ 00:17:29.487 { 00:17:29.487 "subsystem": "bdev", 00:17:29.487 "config": [ 00:17:29.487 { 00:17:29.487 "params": { 00:17:29.487 "trtype": "pcie", 00:17:29.487 "name": "Nvme0", 00:17:29.487 "traddr": "0000:00:06.0" 00:17:29.487 }, 00:17:29.487 "method": "bdev_nvme_attach_controller" 00:17:29.487 }, 00:17:29.487 { 00:17:29.487 "method": "bdev_wait_for_examine" 00:17:29.487 } 00:17:29.487 ] 00:17:29.487 } 00:17:29.487 ] 00:17:29.487 } 00:17:29.487 [2024-04-15 20:44:12.937022] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:17:29.487 [2024-04-15 20:44:12.937178] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57783 ] 00:17:29.745 [2024-04-15 20:44:13.130340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.005 [2024-04-15 20:44:13.330242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.641  Copying: 56/56 [kB] (average 54 MBps) 00:17:31.641 00:17:31.641 20:44:14 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:17:31.641 20:44:14 -- dd/basic_rw.sh@37 -- # gen_conf 00:17:31.641 20:44:14 -- dd/common.sh@31 -- # xtrace_disable 00:17:31.641 20:44:14 -- common/autotest_common.sh@10 -- # set +x 00:17:31.641 { 00:17:31.641 "subsystems": [ 00:17:31.641 { 00:17:31.641 "subsystem": "bdev", 00:17:31.641 "config": [ 00:17:31.641 { 00:17:31.641 "params": { 00:17:31.641 "trtype": "pcie", 00:17:31.641 "name": "Nvme0", 00:17:31.641 "traddr": "0000:00:06.0" 00:17:31.641 }, 00:17:31.641 "method": "bdev_nvme_attach_controller" 00:17:31.641 }, 00:17:31.641 { 00:17:31.641 "method": "bdev_wait_for_examine" 00:17:31.641 } 00:17:31.641 ] 00:17:31.641 } 00:17:31.641 ] 00:17:31.641 } 00:17:31.641 [2024-04-15 20:44:15.117777] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:17:31.641 [2024-04-15 20:44:15.117933] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57815 ] 00:17:31.901 [2024-04-15 20:44:15.290269] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.159 [2024-04-15 20:44:15.493008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.794  Copying: 56/56 [kB] (average 54 MBps) 00:17:33.794 00:17:33.794 20:44:17 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:33.794 20:44:17 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:17:33.794 20:44:17 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:17:33.794 20:44:17 -- dd/common.sh@11 -- # local nvme_ref= 00:17:33.794 20:44:17 -- dd/common.sh@12 -- # local size=57344 00:17:33.794 20:44:17 -- dd/common.sh@14 -- # local bs=1048576 00:17:33.794 20:44:17 -- dd/common.sh@15 -- # local count=1 00:17:33.794 20:44:17 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:17:33.794 20:44:17 -- dd/common.sh@18 -- # gen_conf 00:17:33.794 20:44:17 -- dd/common.sh@31 -- # xtrace_disable 00:17:33.794 20:44:17 -- common/autotest_common.sh@10 -- # set +x 00:17:33.794 { 00:17:33.794 "subsystems": [ 00:17:33.794 { 00:17:33.794 "subsystem": "bdev", 00:17:33.794 "config": [ 00:17:33.794 { 00:17:33.794 "params": { 00:17:33.794 "trtype": "pcie", 00:17:33.794 "name": "Nvme0", 00:17:33.794 "traddr": "0000:00:06.0" 00:17:33.794 }, 00:17:33.794 "method": "bdev_nvme_attach_controller" 00:17:33.794 }, 00:17:33.794 { 00:17:33.794 "method": "bdev_wait_for_examine" 00:17:33.794 } 00:17:33.794 ] 00:17:33.794 } 00:17:33.794 ] 00:17:33.794 } 00:17:34.052 [2024-04-15 20:44:17.307956] Starting SPDK v24.01.1-pre git sha1 
3b33f4333 / DPDK 23.11.0 initialization... 00:17:34.052 [2024-04-15 20:44:17.308114] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57843 ] 00:17:34.052 [2024-04-15 20:44:17.483767] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.311 [2024-04-15 20:44:17.684305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.257  Copying: 1024/1024 [kB] (average 1000 MBps) 00:17:36.257 00:17:36.257 20:44:19 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:17:36.257 20:44:19 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:17:36.257 20:44:19 -- dd/basic_rw.sh@23 -- # count=3 00:17:36.257 20:44:19 -- dd/basic_rw.sh@24 -- # count=3 00:17:36.257 20:44:19 -- dd/basic_rw.sh@25 -- # size=49152 00:17:36.257 20:44:19 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:17:36.257 20:44:19 -- dd/common.sh@98 -- # xtrace_disable 00:17:36.257 20:44:19 -- common/autotest_common.sh@10 -- # set +x 00:17:36.515 20:44:19 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:17:36.515 20:44:19 -- dd/basic_rw.sh@30 -- # gen_conf 00:17:36.515 20:44:19 -- dd/common.sh@31 -- # xtrace_disable 00:17:36.515 20:44:19 -- common/autotest_common.sh@10 -- # set +x 00:17:36.515 { 00:17:36.515 "subsystems": [ 00:17:36.515 { 00:17:36.515 "subsystem": "bdev", 00:17:36.515 "config": [ 00:17:36.515 { 00:17:36.515 "params": { 00:17:36.515 "trtype": "pcie", 00:17:36.515 "name": "Nvme0", 00:17:36.515 "traddr": "0000:00:06.0" 00:17:36.515 }, 00:17:36.515 "method": "bdev_nvme_attach_controller" 00:17:36.515 }, 00:17:36.515 { 00:17:36.515 "method": "bdev_wait_for_examine" 00:17:36.515 } 00:17:36.515 ] 00:17:36.515 } 00:17:36.515 ] 00:17:36.515 } 00:17:36.774 [2024-04-15 20:44:20.049793] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
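The count, size, bs and qd values above are tied together by size = count * bs: the earlier 8 KiB pass used 7 * 8192 = 57344, and this 16 KiB pass uses 3 * 16384 = 49152. One iteration of the sweep, reduced to its essentials (reusing the SPDK_DD and CONF shorthand from the earlier sketch; the suite's gen_conf/gen_bytes plumbing is omitted):

  bs=16384; qd=1; count=3
  size=$((count * bs))   # 49152, as printed by basic_rw.sh@25 above
  # Write the test file to the bdev, then read the same number of blocks back.
  "$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json <(echo "$CONF")
  "$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" --count="$count" --json <(echo "$CONF")
  diff -q dd.dump0 dd.dump1   # the basic_rw.sh@44 verification step
  # clear_nvme: zero the first MiB so the next iteration starts from clean media.
  "$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(echo "$CONF")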
00:17:36.774 [2024-04-15 20:44:20.049939] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57888 ] 00:17:36.774 [2024-04-15 20:44:20.203426] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.032 [2024-04-15 20:44:20.392309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.667  Copying: 48/48 [kB] (average 46 MBps) 00:17:38.667 00:17:38.667 20:44:21 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:17:38.667 20:44:21 -- dd/basic_rw.sh@37 -- # gen_conf 00:17:38.667 20:44:21 -- dd/common.sh@31 -- # xtrace_disable 00:17:38.667 20:44:21 -- common/autotest_common.sh@10 -- # set +x 00:17:38.667 { 00:17:38.667 "subsystems": [ 00:17:38.667 { 00:17:38.667 "subsystem": "bdev", 00:17:38.667 "config": [ 00:17:38.667 { 00:17:38.667 "params": { 00:17:38.667 "trtype": "pcie", 00:17:38.667 "name": "Nvme0", 00:17:38.667 "traddr": "0000:00:06.0" 00:17:38.667 }, 00:17:38.667 "method": "bdev_nvme_attach_controller" 00:17:38.667 }, 00:17:38.667 { 00:17:38.667 "method": "bdev_wait_for_examine" 00:17:38.667 } 00:17:38.667 ] 00:17:38.667 } 00:17:38.667 ] 00:17:38.667 } 00:17:38.667 [2024-04-15 20:44:22.139472] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:17:38.668 [2024-04-15 20:44:22.139620] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57921 ] 00:17:38.925 [2024-04-15 20:44:22.294428] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.184 [2024-04-15 20:44:22.487507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.857  Copying: 48/48 [kB] (average 46 MBps) 00:17:40.857 00:17:40.857 20:44:24 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:40.857 20:44:24 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:17:40.857 20:44:24 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:17:40.857 20:44:24 -- dd/common.sh@11 -- # local nvme_ref= 00:17:40.857 20:44:24 -- dd/common.sh@12 -- # local size=49152 00:17:40.857 20:44:24 -- dd/common.sh@14 -- # local bs=1048576 00:17:40.857 20:44:24 -- dd/common.sh@15 -- # local count=1 00:17:40.857 20:44:24 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:17:40.857 20:44:24 -- dd/common.sh@18 -- # gen_conf 00:17:40.857 20:44:24 -- dd/common.sh@31 -- # xtrace_disable 00:17:40.857 20:44:24 -- common/autotest_common.sh@10 -- # set +x 00:17:40.857 { 00:17:40.857 "subsystems": [ 00:17:40.857 { 00:17:40.857 "subsystem": "bdev", 00:17:40.857 "config": [ 00:17:40.857 { 00:17:40.857 "params": { 00:17:40.857 "trtype": "pcie", 00:17:40.857 "name": "Nvme0", 00:17:40.857 "traddr": "0000:00:06.0" 00:17:40.857 }, 00:17:40.857 "method": "bdev_nvme_attach_controller" 00:17:40.857 }, 00:17:40.857 { 00:17:40.857 "method": "bdev_wait_for_examine" 00:17:40.857 } 00:17:40.857 ] 00:17:40.857 } 00:17:40.857 ] 00:17:40.857 } 00:17:40.857 [2024-04-15 20:44:24.292872] Starting SPDK v24.01.1-pre git sha1 
3b33f4333 / DPDK 23.11.0 initialization... 00:17:40.857 [2024-04-15 20:44:24.293006] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57950 ] 00:17:41.116 [2024-04-15 20:44:24.442902] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.376 [2024-04-15 20:44:24.632993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.014  Copying: 1024/1024 [kB] (average 1000 MBps) 00:17:43.014 00:17:43.014 20:44:26 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:17:43.014 20:44:26 -- dd/basic_rw.sh@23 -- # count=3 00:17:43.014 20:44:26 -- dd/basic_rw.sh@24 -- # count=3 00:17:43.014 20:44:26 -- dd/basic_rw.sh@25 -- # size=49152 00:17:43.014 20:44:26 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:17:43.014 20:44:26 -- dd/common.sh@98 -- # xtrace_disable 00:17:43.014 20:44:26 -- common/autotest_common.sh@10 -- # set +x 00:17:43.274 20:44:26 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:17:43.532 20:44:26 -- dd/basic_rw.sh@30 -- # gen_conf 00:17:43.532 20:44:26 -- dd/common.sh@31 -- # xtrace_disable 00:17:43.532 20:44:26 -- common/autotest_common.sh@10 -- # set +x 00:17:43.532 { 00:17:43.532 "subsystems": [ 00:17:43.532 { 00:17:43.532 "subsystem": "bdev", 00:17:43.532 "config": [ 00:17:43.532 { 00:17:43.532 "params": { 00:17:43.532 "trtype": "pcie", 00:17:43.532 "name": "Nvme0", 00:17:43.532 "traddr": "0000:00:06.0" 00:17:43.532 }, 00:17:43.532 "method": "bdev_nvme_attach_controller" 00:17:43.532 }, 00:17:43.532 { 00:17:43.532 "method": "bdev_wait_for_examine" 00:17:43.532 } 00:17:43.532 ] 00:17:43.532 } 00:17:43.532 ] 00:17:43.532 } 00:17:43.532 [2024-04-15 20:44:26.919117] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
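gen_bytes (dd/common.sh@98) refills the input dump with the requested number of bytes before each pass, 49152 here. The generator itself is not shown in this log; a rough stand-in that preserves the round-trip property would be:

  n=49152
  # The real helper emits alphanumeric data (see the data= value later in this
  # log); for the write/read/diff cycle any byte pattern works.
  head -c "$n" /dev/urandom > dd.dump0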
00:17:43.533 [2024-04-15 20:44:26.919277] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57987 ] 00:17:43.791 [2024-04-15 20:44:27.074372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.791 [2024-04-15 20:44:27.260681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.736  Copying: 48/48 [kB] (average 46 MBps) 00:17:45.736 00:17:45.736 20:44:28 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:17:45.736 20:44:28 -- dd/basic_rw.sh@37 -- # gen_conf 00:17:45.736 20:44:28 -- dd/common.sh@31 -- # xtrace_disable 00:17:45.736 20:44:28 -- common/autotest_common.sh@10 -- # set +x 00:17:45.736 { 00:17:45.736 "subsystems": [ 00:17:45.736 { 00:17:45.736 "subsystem": "bdev", 00:17:45.736 "config": [ 00:17:45.736 { 00:17:45.736 "params": { 00:17:45.736 "trtype": "pcie", 00:17:45.736 "name": "Nvme0", 00:17:45.736 "traddr": "0000:00:06.0" 00:17:45.736 }, 00:17:45.736 "method": "bdev_nvme_attach_controller" 00:17:45.736 }, 00:17:45.736 { 00:17:45.736 "method": "bdev_wait_for_examine" 00:17:45.736 } 00:17:45.736 ] 00:17:45.736 } 00:17:45.736 ] 00:17:45.736 } 00:17:45.736 [2024-04-15 20:44:29.059224] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:17:45.736 [2024-04-15 20:44:29.059373] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58014 ] 00:17:45.736 [2024-04-15 20:44:29.230513] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.995 [2024-04-15 20:44:29.424521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.942  Copying: 48/48 [kB] (average 46 MBps) 00:17:47.942 00:17:47.942 20:44:31 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:47.942 20:44:31 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:17:47.942 20:44:31 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:17:47.942 20:44:31 -- dd/common.sh@11 -- # local nvme_ref= 00:17:47.942 20:44:31 -- dd/common.sh@12 -- # local size=49152 00:17:47.942 20:44:31 -- dd/common.sh@14 -- # local bs=1048576 00:17:47.942 20:44:31 -- dd/common.sh@15 -- # local count=1 00:17:47.942 20:44:31 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:17:47.942 20:44:31 -- dd/common.sh@18 -- # gen_conf 00:17:47.942 20:44:31 -- dd/common.sh@31 -- # xtrace_disable 00:17:47.942 20:44:31 -- common/autotest_common.sh@10 -- # set +x 00:17:47.942 { 00:17:47.942 "subsystems": [ 00:17:47.942 { 00:17:47.942 "subsystem": "bdev", 00:17:47.942 "config": [ 00:17:47.942 { 00:17:47.942 "params": { 00:17:47.942 "trtype": "pcie", 00:17:47.942 "name": "Nvme0", 00:17:47.942 "traddr": "0000:00:06.0" 00:17:47.942 }, 00:17:47.942 "method": "bdev_nvme_attach_controller" 00:17:47.942 }, 00:17:47.942 { 00:17:47.942 "method": "bdev_wait_for_examine" 00:17:47.942 } 00:17:47.942 ] 00:17:47.942 } 00:17:47.942 ] 00:17:47.942 } 00:17:47.942 [2024-04-15 20:44:31.209597] Starting SPDK v24.01.1-pre git sha1 
3b33f4333 / DPDK 23.11.0 initialization... 00:17:47.942 [2024-04-15 20:44:31.209863] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58057 ] 00:17:47.942 [2024-04-15 20:44:31.366708] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.200 [2024-04-15 20:44:31.560824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.704  Copying: 1024/1024 [kB] (average 500 MBps) 00:17:49.704 00:17:49.963 00:17:49.963 real 0m42.322s 00:17:49.963 user 0m35.327s 00:17:49.963 sys 0m4.563s 00:17:49.963 20:44:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:49.963 20:44:33 -- common/autotest_common.sh@10 -- # set +x 00:17:49.963 ************************************ 00:17:49.963 END TEST dd_rw 00:17:49.963 ************************************ 00:17:49.963 20:44:33 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:17:49.963 20:44:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:49.963 20:44:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:49.963 20:44:33 -- common/autotest_common.sh@10 -- # set +x 00:17:49.963 ************************************ 00:17:49.963 START TEST dd_rw_offset 00:17:49.963 ************************************ 00:17:49.963 20:44:33 -- common/autotest_common.sh@1104 -- # basic_offset 00:17:49.963 20:44:33 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:17:49.963 20:44:33 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:17:49.963 20:44:33 -- dd/common.sh@98 -- # xtrace_disable 00:17:49.963 20:44:33 -- common/autotest_common.sh@10 -- # set +x 00:17:49.963 20:44:33 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:17:49.964 20:44:33 -- dd/basic_rw.sh@56 -- # 
data=v7py0aoc665ddldv10mdjjtsfalfiilgf7r0nub2cld3cp35c5pcqwdioc7tu9pddq3c7k9yv1lely4f5iitq2pd83vjwj9o6dj5u76kezlb9v9h1f5lgbtbp7ngmhpi1i5s3sll8qn30sz9law1ec0bt2kr8xz948lnm1pdgur8xludyy9gmxpjpulj0vjf1b2cesrf47lkvua0cwac31aee4eiqz8r7vvbit6uii7n2jnkdp32sxjjlnm8saglokumteomtvve5v02djw536njoctxt4jj6jf7nkz8vij2stmiui7iyxatday2kq20newlpfbqn96pxks17ti4p4okyixmn9gx7u65q78vzom23mwwkw0kjjsgqes5je3zkb4i4o8ube9d6m2vhkt0wkxnn3btfsw0gb1nnx5to078pjvc1j4pc4khzz78a8v2myyamevdufdf1yt5hgspqdu9ri6akh7v5fi7fkgpfvt6zxo1emfr3dzeqqk3np71e4ng2925m6u6h6pwxhn4fh7odpoksgj7gcqaawa2zcixy6938uhlz27owiy6tqwcr68 [remainder of the 4096-byte generated pattern elided for readability; it ends ...iphcm2c1qogcj8q4va5rnko3lxkr8] 00:17:49.964 20:44:33 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:17:49.964 20:44:33 -- dd/basic_rw.sh@59 -- # gen_conf 00:17:49.964 20:44:33 -- dd/common.sh@31 -- # xtrace_disable 00:17:49.964 20:44:33 -- common/autotest_common.sh@10 -- # set +x 00:17:49.964 { 00:17:49.964 "subsystems": [ 00:17:49.964 { 00:17:49.964 "subsystem": "bdev", 00:17:49.964 "config": [ 00:17:49.964 { 00:17:49.964 "params": { 00:17:49.964 "trtype": "pcie", 00:17:49.964 "name": "Nvme0", 00:17:49.964 "traddr": "0000:00:06.0" 00:17:49.964 }, 00:17:49.964 "method": "bdev_nvme_attach_controller" 00:17:49.964 }, 00:17:49.964 { 00:17:49.964 "method": "bdev_wait_for_examine" 00:17:49.964 } 00:17:49.964 ] 00:17:49.964 } 00:17:49.964 ] 00:17:49.964 } 00:17:50.222 [2024-04-15 20:44:33.476708] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:17:50.222 [2024-04-15 20:44:33.476851] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58110 ] 00:17:50.222 [2024-04-15 20:44:33.648732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.481 [2024-04-15 20:44:33.846753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.987 Copying: 4096/4096 [B] (average 4000 kBps) 00:17:51.987 00:17:51.987 00:17:52.244 20:44:35 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:17:52.244 20:44:35 -- dd/basic_rw.sh@65 -- # gen_conf 00:17:52.244 20:44:35 -- dd/common.sh@31 -- # xtrace_disable 00:17:52.244 20:44:35 -- common/autotest_common.sh@10 -- # set +x 00:17:52.244 { 00:17:52.244 "subsystems": [ 00:17:52.244 { 00:17:52.244 "subsystem": "bdev", 00:17:52.244 "config": [ 00:17:52.244 { 00:17:52.244 "params": { 00:17:52.244 "trtype": "pcie", 00:17:52.244 "name": "Nvme0", 00:17:52.244 "traddr": "0000:00:06.0" 00:17:52.244 }, 00:17:52.244 "method": "bdev_nvme_attach_controller" 00:17:52.244 }, 00:17:52.244 { 00:17:52.244 "method": "bdev_wait_for_examine" 00:17:52.244 } 00:17:52.244 ] 00:17:52.244 } 00:17:52.244 ] 00:17:52.244 } 00:17:52.244 [2024-04-15 20:44:35.647194] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization...
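The dd_rw_offset pass above writes a single 4096-byte block at offset 1 and reads it back from the same offset. Stripped of the config plumbing, and assuming the @71 read is redirected from the read-back dump, it is (same SPDK_DD/CONF shorthand as before):

  "$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json <(echo "$CONF")             # skip 1 block on output
  "$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json <(echo "$CONF")   # skip 1 block on input
  read -rn4096 data_check < dd.dump1   # then compared against $data, as traced below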
00:17:52.244 [2024-04-15 20:44:35.647423] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58146 ] 00:17:52.503 [2024-04-15 20:44:35.854125] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.760 [2024-04-15 20:44:36.056405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.397 Copying: 4096/4096 [B] (average 4000 kBps) 00:17:54.397 00:17:54.397 20:44:37 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:17:54.397 ************************************ 00:17:54.397 END TEST dd_rw_offset 00:17:54.397 ************************************ 00:17:54.397 20:44:37 -- dd/basic_rw.sh@72 -- # [[ v7py0aoc665ddldv10mdjjtsfalfiilgf7r0nub2cld3cp35c5pcqwdioc7tu9pddq3c7k9yv1lely4f5iitq2pd83vjwj9o6dj5u76kezlb9v9h1f5... == \v\7\p\y\0\a\o\c\6\6\5\d\d\l\d\v\1\0\m\d\j\j\t\s... ]] [the full 4096-byte pattern and its backslash-escaped copy are elided for readability; the two operands matched] 00:17:54.398 00:17:54.398 real 0m4.425s 00:17:54.398 user 0m3.667s 00:17:54.398 sys 0m0.481s 00:17:54.398 20:44:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:54.398 20:44:37 -- common/autotest_common.sh@10 -- # set +x 00:17:54.398 20:44:37 -- dd/basic_rw.sh@1 -- # cleanup 00:17:54.398 20:44:37 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:17:54.398 20:44:37 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:17:54.398 20:44:37 -- dd/common.sh@11 -- # local nvme_ref= 00:17:54.398 20:44:37 -- dd/common.sh@12 -- # local size=0xffff 00:17:54.398 20:44:37 -- dd/common.sh@14 -- # local bs=1048576 00:17:54.398 20:44:37 -- dd/common.sh@15 -- # local count=1 00:17:54.398 20:44:37 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:17:54.398 20:44:37 -- dd/common.sh@18 -- # gen_conf 00:17:54.398 20:44:37 -- dd/common.sh@31 -- # xtrace_disable 00:17:54.398 20:44:37 -- common/autotest_common.sh@10 -- # set +x 00:17:54.398 { 00:17:54.398 "subsystems": [ 00:17:54.398 { 00:17:54.398
"subsystem": "bdev", 00:17:54.398 "config": [ 00:17:54.398 { 00:17:54.398 "params": { 00:17:54.398 "trtype": "pcie", 00:17:54.398 "name": "Nvme0", 00:17:54.398 "traddr": "0000:00:06.0" 00:17:54.398 }, 00:17:54.398 "method": "bdev_nvme_attach_controller" 00:17:54.398 }, 00:17:54.398 { 00:17:54.398 "method": "bdev_wait_for_examine" 00:17:54.398 } 00:17:54.398 ] 00:17:54.398 } 00:17:54.398 ] 00:17:54.398 } 00:17:54.658 [2024-04-15 20:44:37.900561] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:17:54.658 [2024-04-15 20:44:37.900958] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58193 ] 00:17:54.658 [2024-04-15 20:44:38.065755] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.917 [2024-04-15 20:44:38.253684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.554  Copying: 1024/1024 [kB] (average 1000 MBps) 00:17:56.554 00:17:56.554 20:44:39 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:56.554 00:17:56.554 real 0m51.701s 00:17:56.554 user 0m42.742s 00:17:56.554 sys 0m5.796s 00:17:56.554 20:44:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:56.554 20:44:39 -- common/autotest_common.sh@10 -- # set +x 00:17:56.554 ************************************ 00:17:56.554 END TEST spdk_dd_basic_rw 00:17:56.554 ************************************ 00:17:56.554 20:44:39 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:17:56.554 20:44:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:56.554 20:44:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:56.555 20:44:39 -- common/autotest_common.sh@10 -- # set +x 00:17:56.555 ************************************ 00:17:56.555 START TEST spdk_dd_posix 00:17:56.555 ************************************ 00:17:56.555 20:44:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:17:56.814 * Looking for test storage... 
00:17:56.814 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:17:56.814 20:44:40 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:56.814 20:44:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:56.814 20:44:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:56.814 20:44:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:56.814 20:44:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:17:56.814 20:44:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:17:56.814 20:44:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:17:56.814 20:44:40 -- paths/export.sh@5 -- # export PATH 00:17:56.814 20:44:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:17:56.814 20:44:40 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:17:56.814 20:44:40 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:17:56.814 20:44:40 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:17:56.814 20:44:40 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:17:56.814 20:44:40 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:17:56.814 20:44:40 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:17:56.814 20:44:40 -- dd/posix.sh@130 -- # tests 00:17:56.814 20:44:40 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', using AIO' 00:17:56.814 * First test run, using AIO 00:17:56.814 20:44:40 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:17:56.814 20:44:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:56.814 20:44:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:56.814 20:44:40 -- common/autotest_common.sh@10 -- 
# set +x 00:17:56.814 ************************************ 00:17:56.814 START TEST dd_flag_append 00:17:56.814 ************************************ 00:17:56.814 20:44:40 -- common/autotest_common.sh@1104 -- # append 00:17:56.814 20:44:40 -- dd/posix.sh@16 -- # local dump0 00:17:56.814 20:44:40 -- dd/posix.sh@17 -- # local dump1 00:17:56.814 20:44:40 -- dd/posix.sh@19 -- # gen_bytes 32 00:17:56.814 20:44:40 -- dd/common.sh@98 -- # xtrace_disable 00:17:56.814 20:44:40 -- common/autotest_common.sh@10 -- # set +x 00:17:56.814 20:44:40 -- dd/posix.sh@19 -- # dump0=weyl50dnkn2utuvoqeyqpmy20c2v0ea4 00:17:56.814 20:44:40 -- dd/posix.sh@20 -- # gen_bytes 32 00:17:56.814 20:44:40 -- dd/common.sh@98 -- # xtrace_disable 00:17:56.814 20:44:40 -- common/autotest_common.sh@10 -- # set +x 00:17:56.814 20:44:40 -- dd/posix.sh@20 -- # dump1=lkf8gkua0dbf17ffjqcv2y2ayjyra50s 00:17:56.814 20:44:40 -- dd/posix.sh@22 -- # printf %s weyl50dnkn2utuvoqeyqpmy20c2v0ea4 00:17:56.814 20:44:40 -- dd/posix.sh@23 -- # printf %s lkf8gkua0dbf17ffjqcv2y2ayjyra50s 00:17:56.814 20:44:40 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:17:56.814 [2024-04-15 20:44:40.237514] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:17:56.814 [2024-04-15 20:44:40.237812] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58283 ] 00:17:57.074 [2024-04-15 20:44:40.391712] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.333 [2024-04-15 20:44:40.590130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.967  Copying: 32/32 [B] (average 31 kBps) 00:17:58.967 00:17:58.967 20:44:42 -- dd/posix.sh@27 -- # [[ lkf8gkua0dbf17ffjqcv2y2ayjyra50sweyl50dnkn2utuvoqeyqpmy20c2v0ea4 == \l\k\f\8\g\k\u\a\0\d\b\f\1\7\f\f\j\q\c\v\2\y\2\a\y\j\y\r\a\5\0\s\w\e\y\l\5\0\d\n\k\n\2\u\t\u\v\o\q\e\y\q\p\m\y\2\0\c\2\v\0\e\a\4 ]] 00:17:58.967 00:17:58.967 real 0m2.065s 00:17:58.967 user 0m1.648s 00:17:58.967 sys 0m0.216s 00:17:58.967 20:44:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:58.967 20:44:42 -- common/autotest_common.sh@10 -- # set +x 00:17:58.967 ************************************ 00:17:58.967 END TEST dd_flag_append 00:17:58.967 ************************************ 00:17:58.967 20:44:42 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:17:58.967 20:44:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:58.967 20:44:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:58.967 20:44:42 -- common/autotest_common.sh@10 -- # set +x 00:17:58.967 ************************************ 00:17:58.967 START TEST dd_flag_directory 00:17:58.967 ************************************ 00:17:58.967 20:44:42 -- common/autotest_common.sh@1104 -- # directory 00:17:58.967 20:44:42 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:17:58.967 20:44:42 -- common/autotest_common.sh@640 -- # local es=0 00:17:58.967 20:44:42 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:17:58.967 20:44:42 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:58.967 20:44:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:58.967 20:44:42 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:58.967 20:44:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:58.967 20:44:42 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:58.967 20:44:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:58.967 20:44:42 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:58.967 20:44:42 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:17:58.967 20:44:42 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:17:58.967 [2024-04-15 20:44:42.363733] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:17:58.967 [2024-04-15 20:44:42.363886] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58339 ] 00:17:59.225 [2024-04-15 20:44:42.510799] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.225 [2024-04-15 20:44:42.707290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.790 [2024-04-15 20:44:43.053707] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:17:59.790 [2024-04-15 20:44:43.053768] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:17:59.791 [2024-04-15 20:44:43.053795] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:00.725 [2024-04-15 20:44:43.897793] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:18:00.984 20:44:44 -- common/autotest_common.sh@643 -- # es=236 00:18:00.984 20:44:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:00.984 20:44:44 -- common/autotest_common.sh@652 -- # es=108 00:18:00.984 20:44:44 -- common/autotest_common.sh@653 -- # case "$es" in 00:18:00.985 20:44:44 -- common/autotest_common.sh@660 -- # es=1 00:18:00.985 20:44:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:00.985 20:44:44 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:18:00.985 20:44:44 -- common/autotest_common.sh@640 -- # local es=0 00:18:00.985 20:44:44 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:18:00.985 20:44:44 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:00.985 20:44:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:00.985 20:44:44 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:00.985 20:44:44 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:00.985 20:44:44 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:00.985 20:44:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:00.985 20:44:44 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:00.985 20:44:44 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:18:00.985 20:44:44 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:18:00.985 [2024-04-15 20:44:44.420719] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:18:00.985 [2024-04-15 20:44:44.420894] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58371 ] 00:18:01.244 [2024-04-15 20:44:44.569111] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.503 [2024-04-15 20:44:44.759912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.762 [2024-04-15 20:44:45.100832] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:18:01.762 [2024-04-15 20:44:45.100899] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:18:01.762 [2024-04-15 20:44:45.100926] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:02.699 [2024-04-15 20:44:45.975521] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:18:02.958 ************************************ 00:18:02.958 END TEST dd_flag_directory 00:18:02.958 ************************************ 00:18:02.958 20:44:46 -- common/autotest_common.sh@643 -- # es=236 00:18:02.958 20:44:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:02.958 20:44:46 -- common/autotest_common.sh@652 -- # es=108 00:18:02.958 20:44:46 -- common/autotest_common.sh@653 -- # case "$es" in 00:18:02.958 20:44:46 -- common/autotest_common.sh@660 -- # es=1 00:18:02.958 20:44:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:02.958 00:18:02.958 real 0m4.141s 00:18:02.958 user 0m3.328s 00:18:02.958 sys 0m0.420s 00:18:02.958 20:44:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:02.958 20:44:46 -- common/autotest_common.sh@10 -- # set +x 00:18:02.958 20:44:46 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:18:02.958 20:44:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:02.958 20:44:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:02.958 20:44:46 -- common/autotest_common.sh@10 -- # set +x 00:18:02.958 ************************************ 00:18:02.958 START TEST dd_flag_nofollow 00:18:02.958 ************************************ 00:18:02.958 20:44:46 -- common/autotest_common.sh@1104 -- # nofollow 00:18:02.958 20:44:46 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:18:02.958 20:44:46 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:18:02.958 20:44:46 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 
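posix.sh@39 and @40 point dd.dump0.link and dd.dump1.link at the two dump files; the nofollow cases that follow must then fail whenever a link is opened with the flag set, and the NOT wrapper converts that expected failure into a pass. A sketch of the input-side case, with the earlier SPDK_DD/CONF shorthand:

  ln -fs dd.dump0 dd.dump0.link
  if "$SPDK_DD" --if=dd.dump0.link --iflag=nofollow --of=dd.dump1 --json <(echo "$CONF"); then
    echo "nofollow copy unexpectedly succeeded" >&2   # would fail the test
    exit 1
  fi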
00:18:02.958 20:44:46 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:18:02.958 20:44:46 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:02.958 20:44:46 -- common/autotest_common.sh@640 -- # local es=0 00:18:02.958 20:44:46 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:02.959 20:44:46 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:02.959 20:44:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:02.959 20:44:46 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:02.959 20:44:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:02.959 20:44:46 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:02.959 20:44:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:02.959 20:44:46 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:02.959 20:44:46 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:18:02.959 20:44:46 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:03.218 [2024-04-15 20:44:46.572091] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
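The es= bookkeeping already seen in the directory test above (es=236, the (( es > 128 )) check, es=108, then es=1) is the suite's NOT wrapper normalizing a signal-style exit status before inverting it. Simplified to its visible behaviour:

  NOT() {
    "$@"
    local es=$?
    (( es > 128 )) && es=$((es - 128))   # 236 -> 108, as in the trace
    (( es != 0 ))                        # success here means the command failed
  }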
00:18:03.218 [2024-04-15 20:44:46.572235] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58421 ] 00:18:03.477 [2024-04-15 20:44:46.724820] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.477 [2024-04-15 20:44:46.920222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.045 [2024-04-15 20:44:47.267834] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:18:04.045 [2024-04-15 20:44:47.267902] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:18:04.045 [2024-04-15 20:44:47.267927] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:04.982 [2024-04-15 20:44:48.118177] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:18:05.242 20:44:48 -- common/autotest_common.sh@643 -- # es=216 00:18:05.242 20:44:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:05.242 20:44:48 -- common/autotest_common.sh@652 -- # es=88 00:18:05.242 20:44:48 -- common/autotest_common.sh@653 -- # case "$es" in 00:18:05.242 20:44:48 -- common/autotest_common.sh@660 -- # es=1 00:18:05.242 20:44:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:05.242 20:44:48 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:18:05.242 20:44:48 -- common/autotest_common.sh@640 -- # local es=0 00:18:05.242 20:44:48 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:18:05.242 20:44:48 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:05.242 20:44:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:05.242 20:44:48 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:05.242 20:44:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:05.242 20:44:48 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:05.242 20:44:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:05.242 20:44:48 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:05.242 20:44:48 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:18:05.242 20:44:48 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:18:05.242 [2024-04-15 20:44:48.647323] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
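"Too many levels of symbolic links" is the ELOOP error that open(2) reports when a symlink is opened with O_NOFOLLOW, which is what --iflag=nofollow and --oflag=nofollow request. Coreutils dd exposes the same failure mode, which makes for a quick cross-check outside SPDK:

  ln -fs dd.dump0 dd.dump0.link
  dd if=dd.dump0.link iflag=nofollow of=/dev/null   # fails with ELOOP, like spdk_dd above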
00:18:05.242 [2024-04-15 20:44:48.647486] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58457 ] 00:18:05.501 [2024-04-15 20:44:48.808291] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.761 [2024-04-15 20:44:49.003369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.020 [2024-04-15 20:44:49.331945] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:18:06.020 [2024-04-15 20:44:49.332014] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:18:06.020 [2024-04-15 20:44:49.332040] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:06.968 [2024-04-15 20:44:50.176441] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:18:07.227 20:44:50 -- common/autotest_common.sh@643 -- # es=216 00:18:07.227 20:44:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:07.227 20:44:50 -- common/autotest_common.sh@652 -- # es=88 00:18:07.227 20:44:50 -- common/autotest_common.sh@653 -- # case "$es" in 00:18:07.227 20:44:50 -- common/autotest_common.sh@660 -- # es=1 00:18:07.227 20:44:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:07.227 20:44:50 -- dd/posix.sh@46 -- # gen_bytes 512 00:18:07.227 20:44:50 -- dd/common.sh@98 -- # xtrace_disable 00:18:07.227 20:44:50 -- common/autotest_common.sh@10 -- # set +x 00:18:07.227 20:44:50 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:07.227 [2024-04-15 20:44:50.699090] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
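posix.sh@48 is the positive control: the same copy through the link, but with no nofollow flag, is expected to succeed, which the Copying: 512/512 line that follows confirms. Reduced form, reusing the earlier shorthand:

  "$SPDK_DD" --if=dd.dump0.link --of=dd.dump1 --json <(echo "$CONF")   # link followed; 512 bytes copied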
00:18:07.227 [2024-04-15 20:44:50.699244] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58484 ] 00:18:07.486 [2024-04-15 20:44:50.870951] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.745 [2024-04-15 20:44:51.066509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.381  Copying: 512/512 [B] (average 500 kBps) 00:18:09.381 00:18:09.381 ************************************ 00:18:09.381 END TEST dd_flag_nofollow 00:18:09.381 ************************************ 00:18:09.381 20:44:52 -- dd/posix.sh@49 -- # [[ ugqwfmzpjjkqdmkzkkr8b4me7zvzk9ye7flxamedg8i8vrdica3lre42lx84pgetota2sc4vnnihtoo99ilqrilnk2xww023bo9jhv391cf0dhgma5xc8txj7izz96m4plusl3ajlo51cjb2ufe2krzmade5w5slvevymxzzkolakfytm94ittyou1lv2zd46x0bme9m0dd7pti5lq6nkb792w95egoklzriqfou68xkna6v2p4cmua8fjb08y6vk1fthcvasg0qiidn9rn1wa2h7umcu14yi2mswy27snlr42xql04u7umr90girpg83nii6y7v3vq1bxcj7d2por15qu2lpzwrmqcun6ahipujoy1w0catugjkt6vsua36dplppefkcg9nte1pgfxihw378pojisoohs6iqryg5xlzkqv7b6869lhftrfuibqtmg1rw3kzqucm1bp6di9krvexu0zatsal14mb42411lq1bwufeg1sylvgbvtnk2kk == \u\g\q\w\f\m\z\p\j\j\k\q\d\m\k\z\k\k\r\8\b\4\m\e\7\z\v\z\k\9\y\e\7\f\l\x\a\m\e\d\g\8\i\8\v\r\d\i\c\a\3\l\r\e\4\2\l\x\8\4\p\g\e\t\o\t\a\2\s\c\4\v\n\n\i\h\t\o\o\9\9\i\l\q\r\i\l\n\k\2\x\w\w\0\2\3\b\o\9\j\h\v\3\9\1\c\f\0\d\h\g\m\a\5\x\c\8\t\x\j\7\i\z\z\9\6\m\4\p\l\u\s\l\3\a\j\l\o\5\1\c\j\b\2\u\f\e\2\k\r\z\m\a\d\e\5\w\5\s\l\v\e\v\y\m\x\z\z\k\o\l\a\k\f\y\t\m\9\4\i\t\t\y\o\u\1\l\v\2\z\d\4\6\x\0\b\m\e\9\m\0\d\d\7\p\t\i\5\l\q\6\n\k\b\7\9\2\w\9\5\e\g\o\k\l\z\r\i\q\f\o\u\6\8\x\k\n\a\6\v\2\p\4\c\m\u\a\8\f\j\b\0\8\y\6\v\k\1\f\t\h\c\v\a\s\g\0\q\i\i\d\n\9\r\n\1\w\a\2\h\7\u\m\c\u\1\4\y\i\2\m\s\w\y\2\7\s\n\l\r\4\2\x\q\l\0\4\u\7\u\m\r\9\0\g\i\r\p\g\8\3\n\i\i\6\y\7\v\3\v\q\1\b\x\c\j\7\d\2\p\o\r\1\5\q\u\2\l\p\z\w\r\m\q\c\u\n\6\a\h\i\p\u\j\o\y\1\w\0\c\a\t\u\g\j\k\t\6\v\s\u\a\3\6\d\p\l\p\p\e\f\k\c\g\9\n\t\e\1\p\g\f\x\i\h\w\3\7\8\p\o\j\i\s\o\o\h\s\6\i\q\r\y\g\5\x\l\z\k\q\v\7\b\6\8\6\9\l\h\f\t\r\f\u\i\b\q\t\m\g\1\r\w\3\k\z\q\u\c\m\1\b\p\6\d\i\9\k\r\v\e\x\u\0\z\a\t\s\a\l\1\4\m\b\4\2\4\1\1\l\q\1\b\w\u\f\e\g\1\s\y\l\v\g\b\v\t\n\k\2\k\k ]] 00:18:09.381 00:18:09.381 real 0m6.265s 00:18:09.381 user 0m5.004s 00:18:09.381 sys 0m0.665s 00:18:09.381 20:44:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:09.381 20:44:52 -- common/autotest_common.sh@10 -- # set +x 00:18:09.381 20:44:52 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:18:09.381 20:44:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:09.381 20:44:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:09.381 20:44:52 -- common/autotest_common.sh@10 -- # set +x 00:18:09.381 ************************************ 00:18:09.381 START TEST dd_flag_noatime 00:18:09.381 ************************************ 00:18:09.381 20:44:52 -- common/autotest_common.sh@1104 -- # noatime 00:18:09.381 20:44:52 -- dd/posix.sh@53 -- # local atime_if 00:18:09.381 20:44:52 -- dd/posix.sh@54 -- # local atime_of 00:18:09.381 20:44:52 -- dd/posix.sh@58 -- # gen_bytes 512 00:18:09.381 20:44:52 -- dd/common.sh@98 -- # xtrace_disable 00:18:09.381 20:44:52 -- common/autotest_common.sh@10 -- # set +x 00:18:09.381 20:44:52 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:09.381 20:44:52 -- dd/posix.sh@60 -- # atime_if=1713213891 00:18:09.381 20:44:52 -- 
dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:09.381 20:44:52 -- dd/posix.sh@61 -- # atime_of=1713213892 00:18:09.381 20:44:52 -- dd/posix.sh@66 -- # sleep 1 00:18:10.318 20:44:53 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:10.577 [2024-04-15 20:44:53.926909] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:18:10.577 [2024-04-15 20:44:53.927068] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58557 ] 00:18:10.852 [2024-04-15 20:44:54.102079] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.852 [2024-04-15 20:44:54.311121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.799  Copying: 512/512 [B] (average 500 kBps) 00:18:12.799 00:18:12.799 20:44:55 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:12.799 20:44:55 -- dd/posix.sh@69 -- # (( atime_if == 1713213891 )) 00:18:12.799 20:44:55 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:12.799 20:44:55 -- dd/posix.sh@70 -- # (( atime_of == 1713213892 )) 00:18:12.799 20:44:55 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:12.799 [2024-04-15 20:44:56.081886] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:18:12.799 [2024-04-15 20:44:56.082032] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58587 ] 00:18:12.799 [2024-04-15 20:44:56.250445] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.058 [2024-04-15 20:44:56.449262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.695  Copying: 512/512 [B] (average 500 kBps) 00:18:14.695 00:18:14.695 20:44:58 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:14.695 ************************************ 00:18:14.695 END TEST dd_flag_noatime 00:18:14.695 ************************************ 00:18:14.695 20:44:58 -- dd/posix.sh@73 -- # (( atime_if < 1713213896 )) 00:18:14.695 00:18:14.695 real 0m5.264s 00:18:14.695 user 0m3.393s 00:18:14.695 sys 0m0.466s 00:18:14.695 20:44:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:14.695 20:44:58 -- common/autotest_common.sh@10 -- # set +x 00:18:14.695 20:44:58 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:18:14.695 20:44:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:14.695 20:44:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:14.695 20:44:58 -- common/autotest_common.sh@10 -- # set +x 00:18:14.695 ************************************ 00:18:14.695 START TEST dd_flags_misc 00:18:14.695 ************************************ 00:18:14.695 20:44:58 -- common/autotest_common.sh@1104 -- # io 00:18:14.695 20:44:58 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:18:14.695 20:44:58 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 
00:18:14.695 20:44:58 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:18:14.695 20:44:58 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:18:14.695 20:44:58 -- dd/posix.sh@86 -- # gen_bytes 512 00:18:14.695 20:44:58 -- dd/common.sh@98 -- # xtrace_disable 00:18:14.695 20:44:58 -- common/autotest_common.sh@10 -- # set +x 00:18:14.695 20:44:58 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:14.695 20:44:58 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:18:14.954 [2024-04-15 20:44:58.238954] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:18:14.954 [2024-04-15 20:44:58.239093] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58631 ] 00:18:14.954 [2024-04-15 20:44:58.392163] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.306 [2024-04-15 20:44:58.575993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.952  Copying: 512/512 [B] (average 500 kBps) 00:18:16.952 00:18:16.952 20:45:00 -- dd/posix.sh@93 -- # [[ y5dp0ra6rgjnikwa3jsjcvmc7bob9whsvtpxrnhk2bcycx00xfxk6obidex71zfzar3igop11jb33v1wvy6pmdsnkzrbcin6j7i83xe6pmd38w4gaj7zxgqt5xry3js8zvgxn4tnm1wytmu1sk2529rywdsr3bzv4wcv32di0v1rvq4c1vkxy1joxey79smg7xabgg9ise1o9aqprm6qytvj3kkienq77ybmn40mmfdulg1c7zkjwbe2pp066nokglx75eh4sz1z5xlvmcnlc9s24cajoidxvu5rinx5v1sazfysa6njfewady43hddmrhwv8cor0oxkpg9bhda2yh5bs5ywl34u070jyki6te807js5my4ge62friordup4eu7ehb4wwnezrwwdje90yingom8vs1qm8pbuzwnuz3lx6kztrr0fyngavn28dr7uiuhvlrfd2hhzrmbrby70u66pz3v2ersna1iqxf9q8ljn683l2smq8xlaa2u3wwyn == \y\5\d\p\0\r\a\6\r\g\j\n\i\k\w\a\3\j\s\j\c\v\m\c\7\b\o\b\9\w\h\s\v\t\p\x\r\n\h\k\2\b\c\y\c\x\0\0\x\f\x\k\6\o\b\i\d\e\x\7\1\z\f\z\a\r\3\i\g\o\p\1\1\j\b\3\3\v\1\w\v\y\6\p\m\d\s\n\k\z\r\b\c\i\n\6\j\7\i\8\3\x\e\6\p\m\d\3\8\w\4\g\a\j\7\z\x\g\q\t\5\x\r\y\3\j\s\8\z\v\g\x\n\4\t\n\m\1\w\y\t\m\u\1\s\k\2\5\2\9\r\y\w\d\s\r\3\b\z\v\4\w\c\v\3\2\d\i\0\v\1\r\v\q\4\c\1\v\k\x\y\1\j\o\x\e\y\7\9\s\m\g\7\x\a\b\g\g\9\i\s\e\1\o\9\a\q\p\r\m\6\q\y\t\v\j\3\k\k\i\e\n\q\7\7\y\b\m\n\4\0\m\m\f\d\u\l\g\1\c\7\z\k\j\w\b\e\2\p\p\0\6\6\n\o\k\g\l\x\7\5\e\h\4\s\z\1\z\5\x\l\v\m\c\n\l\c\9\s\2\4\c\a\j\o\i\d\x\v\u\5\r\i\n\x\5\v\1\s\a\z\f\y\s\a\6\n\j\f\e\w\a\d\y\4\3\h\d\d\m\r\h\w\v\8\c\o\r\0\o\x\k\p\g\9\b\h\d\a\2\y\h\5\b\s\5\y\w\l\3\4\u\0\7\0\j\y\k\i\6\t\e\8\0\7\j\s\5\m\y\4\g\e\6\2\f\r\i\o\r\d\u\p\4\e\u\7\e\h\b\4\w\w\n\e\z\r\w\w\d\j\e\9\0\y\i\n\g\o\m\8\v\s\1\q\m\8\p\b\u\z\w\n\u\z\3\l\x\6\k\z\t\r\r\0\f\y\n\g\a\v\n\2\8\d\r\7\u\i\u\h\v\l\r\f\d\2\h\h\z\r\m\b\r\b\y\7\0\u\6\6\p\z\3\v\2\e\r\s\n\a\1\i\q\x\f\9\q\8\l\j\n\6\8\3\l\2\s\m\q\8\x\l\a\a\2\u\3\w\w\y\n ]] 00:18:16.952 20:45:00 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:16.952 20:45:00 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:18:16.952 [2024-04-15 20:45:00.311335] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
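The long bracketed expression above is not corruption: after each copy, posix.sh reads the 512 random bytes back out of dd.dump1 and string-compares them with the source, and bash's xtrace prints the right-hand side of [[ ... == ... ]] glob-escaped (hence the \y\5\d\p... run), since that operand position is a pattern. Reduced to its essentials, with quoting doing the job the escaping does in the trace:

    data=$(<dd.dump0)        # the bytes gen_bytes wrote
    copy=$(<dd.dump1)        # what spdk_dd produced
    [[ $copy == "$data" ]]   # quoted RHS compares literally instead of as a glob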
00:18:16.952 [2024-04-15 20:45:00.311478] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58664 ] 00:18:17.212 [2024-04-15 20:45:00.472220] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.212 [2024-04-15 20:45:00.662254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.745  Copying: 512/512 [B] (average 500 kBps) 00:18:18.745 00:18:19.004 20:45:02 -- dd/posix.sh@93 -- # [[ y5dp0ra6rgjnikwa3jsjcvmc7bob9whsvtpxrnhk2bcycx00xfxk6obidex71zfzar3igop11jb33v1wvy6pmdsnkzrbcin6j7i83xe6pmd38w4gaj7zxgqt5xry3js8zvgxn4tnm1wytmu1sk2529rywdsr3bzv4wcv32di0v1rvq4c1vkxy1joxey79smg7xabgg9ise1o9aqprm6qytvj3kkienq77ybmn40mmfdulg1c7zkjwbe2pp066nokglx75eh4sz1z5xlvmcnlc9s24cajoidxvu5rinx5v1sazfysa6njfewady43hddmrhwv8cor0oxkpg9bhda2yh5bs5ywl34u070jyki6te807js5my4ge62friordup4eu7ehb4wwnezrwwdje90yingom8vs1qm8pbuzwnuz3lx6kztrr0fyngavn28dr7uiuhvlrfd2hhzrmbrby70u66pz3v2ersna1iqxf9q8ljn683l2smq8xlaa2u3wwyn == \y\5\d\p\0\r\a\6\r\g\j\n\i\k\w\a\3\j\s\j\c\v\m\c\7\b\o\b\9\w\h\s\v\t\p\x\r\n\h\k\2\b\c\y\c\x\0\0\x\f\x\k\6\o\b\i\d\e\x\7\1\z\f\z\a\r\3\i\g\o\p\1\1\j\b\3\3\v\1\w\v\y\6\p\m\d\s\n\k\z\r\b\c\i\n\6\j\7\i\8\3\x\e\6\p\m\d\3\8\w\4\g\a\j\7\z\x\g\q\t\5\x\r\y\3\j\s\8\z\v\g\x\n\4\t\n\m\1\w\y\t\m\u\1\s\k\2\5\2\9\r\y\w\d\s\r\3\b\z\v\4\w\c\v\3\2\d\i\0\v\1\r\v\q\4\c\1\v\k\x\y\1\j\o\x\e\y\7\9\s\m\g\7\x\a\b\g\g\9\i\s\e\1\o\9\a\q\p\r\m\6\q\y\t\v\j\3\k\k\i\e\n\q\7\7\y\b\m\n\4\0\m\m\f\d\u\l\g\1\c\7\z\k\j\w\b\e\2\p\p\0\6\6\n\o\k\g\l\x\7\5\e\h\4\s\z\1\z\5\x\l\v\m\c\n\l\c\9\s\2\4\c\a\j\o\i\d\x\v\u\5\r\i\n\x\5\v\1\s\a\z\f\y\s\a\6\n\j\f\e\w\a\d\y\4\3\h\d\d\m\r\h\w\v\8\c\o\r\0\o\x\k\p\g\9\b\h\d\a\2\y\h\5\b\s\5\y\w\l\3\4\u\0\7\0\j\y\k\i\6\t\e\8\0\7\j\s\5\m\y\4\g\e\6\2\f\r\i\o\r\d\u\p\4\e\u\7\e\h\b\4\w\w\n\e\z\r\w\w\d\j\e\9\0\y\i\n\g\o\m\8\v\s\1\q\m\8\p\b\u\z\w\n\u\z\3\l\x\6\k\z\t\r\r\0\f\y\n\g\a\v\n\2\8\d\r\7\u\i\u\h\v\l\r\f\d\2\h\h\z\r\m\b\r\b\y\7\0\u\6\6\p\z\3\v\2\e\r\s\n\a\1\i\q\x\f\9\q\8\l\j\n\6\8\3\l\2\s\m\q\8\x\l\a\a\2\u\3\w\w\y\n ]] 00:18:19.004 20:45:02 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:19.004 20:45:02 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:18:19.004 [2024-04-15 20:45:02.394297] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:18:19.004 [2024-04-15 20:45:02.394449] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58697 ] 00:18:19.263 [2024-04-15 20:45:02.540054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.263 [2024-04-15 20:45:02.737449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.208  Copying: 512/512 [B] (average 166 kBps) 00:18:21.208 00:18:21.208 20:45:04 -- dd/posix.sh@93 -- # [[ y5dp0ra6rgjnikwa3jsjcvmc7bob9whsvtpxrnhk2bcycx00xfxk6obidex71zfzar3igop11jb33v1wvy6pmdsnkzrbcin6j7i83xe6pmd38w4gaj7zxgqt5xry3js8zvgxn4tnm1wytmu1sk2529rywdsr3bzv4wcv32di0v1rvq4c1vkxy1joxey79smg7xabgg9ise1o9aqprm6qytvj3kkienq77ybmn40mmfdulg1c7zkjwbe2pp066nokglx75eh4sz1z5xlvmcnlc9s24cajoidxvu5rinx5v1sazfysa6njfewady43hddmrhwv8cor0oxkpg9bhda2yh5bs5ywl34u070jyki6te807js5my4ge62friordup4eu7ehb4wwnezrwwdje90yingom8vs1qm8pbuzwnuz3lx6kztrr0fyngavn28dr7uiuhvlrfd2hhzrmbrby70u66pz3v2ersna1iqxf9q8ljn683l2smq8xlaa2u3wwyn == \y\5\d\p\0\r\a\6\r\g\j\n\i\k\w\a\3\j\s\j\c\v\m\c\7\b\o\b\9\w\h\s\v\t\p\x\r\n\h\k\2\b\c\y\c\x\0\0\x\f\x\k\6\o\b\i\d\e\x\7\1\z\f\z\a\r\3\i\g\o\p\1\1\j\b\3\3\v\1\w\v\y\6\p\m\d\s\n\k\z\r\b\c\i\n\6\j\7\i\8\3\x\e\6\p\m\d\3\8\w\4\g\a\j\7\z\x\g\q\t\5\x\r\y\3\j\s\8\z\v\g\x\n\4\t\n\m\1\w\y\t\m\u\1\s\k\2\5\2\9\r\y\w\d\s\r\3\b\z\v\4\w\c\v\3\2\d\i\0\v\1\r\v\q\4\c\1\v\k\x\y\1\j\o\x\e\y\7\9\s\m\g\7\x\a\b\g\g\9\i\s\e\1\o\9\a\q\p\r\m\6\q\y\t\v\j\3\k\k\i\e\n\q\7\7\y\b\m\n\4\0\m\m\f\d\u\l\g\1\c\7\z\k\j\w\b\e\2\p\p\0\6\6\n\o\k\g\l\x\7\5\e\h\4\s\z\1\z\5\x\l\v\m\c\n\l\c\9\s\2\4\c\a\j\o\i\d\x\v\u\5\r\i\n\x\5\v\1\s\a\z\f\y\s\a\6\n\j\f\e\w\a\d\y\4\3\h\d\d\m\r\h\w\v\8\c\o\r\0\o\x\k\p\g\9\b\h\d\a\2\y\h\5\b\s\5\y\w\l\3\4\u\0\7\0\j\y\k\i\6\t\e\8\0\7\j\s\5\m\y\4\g\e\6\2\f\r\i\o\r\d\u\p\4\e\u\7\e\h\b\4\w\w\n\e\z\r\w\w\d\j\e\9\0\y\i\n\g\o\m\8\v\s\1\q\m\8\p\b\u\z\w\n\u\z\3\l\x\6\k\z\t\r\r\0\f\y\n\g\a\v\n\2\8\d\r\7\u\i\u\h\v\l\r\f\d\2\h\h\z\r\m\b\r\b\y\7\0\u\6\6\p\z\3\v\2\e\r\s\n\a\1\i\q\x\f\9\q\8\l\j\n\6\8\3\l\2\s\m\q\8\x\l\a\a\2\u\3\w\w\y\n ]] 00:18:21.208 20:45:04 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:21.208 20:45:04 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:18:21.208 [2024-04-15 20:45:04.467242] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:18:21.208 [2024-04-15 20:45:04.467394] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58726 ] 00:18:21.208 [2024-04-15 20:45:04.626123] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.494 [2024-04-15 20:45:04.823922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.131  Copying: 512/512 [B] (average 250 kBps) 00:18:23.131 00:18:23.131 20:45:06 -- dd/posix.sh@93 -- # [[ y5dp0ra6rgjnikwa3jsjcvmc7bob9whsvtpxrnhk2bcycx00xfxk6obidex71zfzar3igop11jb33v1wvy6pmdsnkzrbcin6j7i83xe6pmd38w4gaj7zxgqt5xry3js8zvgxn4tnm1wytmu1sk2529rywdsr3bzv4wcv32di0v1rvq4c1vkxy1joxey79smg7xabgg9ise1o9aqprm6qytvj3kkienq77ybmn40mmfdulg1c7zkjwbe2pp066nokglx75eh4sz1z5xlvmcnlc9s24cajoidxvu5rinx5v1sazfysa6njfewady43hddmrhwv8cor0oxkpg9bhda2yh5bs5ywl34u070jyki6te807js5my4ge62friordup4eu7ehb4wwnezrwwdje90yingom8vs1qm8pbuzwnuz3lx6kztrr0fyngavn28dr7uiuhvlrfd2hhzrmbrby70u66pz3v2ersna1iqxf9q8ljn683l2smq8xlaa2u3wwyn == \y\5\d\p\0\r\a\6\r\g\j\n\i\k\w\a\3\j\s\j\c\v\m\c\7\b\o\b\9\w\h\s\v\t\p\x\r\n\h\k\2\b\c\y\c\x\0\0\x\f\x\k\6\o\b\i\d\e\x\7\1\z\f\z\a\r\3\i\g\o\p\1\1\j\b\3\3\v\1\w\v\y\6\p\m\d\s\n\k\z\r\b\c\i\n\6\j\7\i\8\3\x\e\6\p\m\d\3\8\w\4\g\a\j\7\z\x\g\q\t\5\x\r\y\3\j\s\8\z\v\g\x\n\4\t\n\m\1\w\y\t\m\u\1\s\k\2\5\2\9\r\y\w\d\s\r\3\b\z\v\4\w\c\v\3\2\d\i\0\v\1\r\v\q\4\c\1\v\k\x\y\1\j\o\x\e\y\7\9\s\m\g\7\x\a\b\g\g\9\i\s\e\1\o\9\a\q\p\r\m\6\q\y\t\v\j\3\k\k\i\e\n\q\7\7\y\b\m\n\4\0\m\m\f\d\u\l\g\1\c\7\z\k\j\w\b\e\2\p\p\0\6\6\n\o\k\g\l\x\7\5\e\h\4\s\z\1\z\5\x\l\v\m\c\n\l\c\9\s\2\4\c\a\j\o\i\d\x\v\u\5\r\i\n\x\5\v\1\s\a\z\f\y\s\a\6\n\j\f\e\w\a\d\y\4\3\h\d\d\m\r\h\w\v\8\c\o\r\0\o\x\k\p\g\9\b\h\d\a\2\y\h\5\b\s\5\y\w\l\3\4\u\0\7\0\j\y\k\i\6\t\e\8\0\7\j\s\5\m\y\4\g\e\6\2\f\r\i\o\r\d\u\p\4\e\u\7\e\h\b\4\w\w\n\e\z\r\w\w\d\j\e\9\0\y\i\n\g\o\m\8\v\s\1\q\m\8\p\b\u\z\w\n\u\z\3\l\x\6\k\z\t\r\r\0\f\y\n\g\a\v\n\2\8\d\r\7\u\i\u\h\v\l\r\f\d\2\h\h\z\r\m\b\r\b\y\7\0\u\6\6\p\z\3\v\2\e\r\s\n\a\1\i\q\x\f\9\q\8\l\j\n\6\8\3\l\2\s\m\q\8\x\l\a\a\2\u\3\w\w\y\n ]] 00:18:23.131 20:45:06 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:18:23.131 20:45:06 -- dd/posix.sh@86 -- # gen_bytes 512 00:18:23.131 20:45:06 -- dd/common.sh@98 -- # xtrace_disable 00:18:23.131 20:45:06 -- common/autotest_common.sh@10 -- # set +x 00:18:23.131 20:45:06 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:23.131 20:45:06 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:18:23.131 [2024-04-15 20:45:06.626064] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:18:23.131 [2024-04-15 20:45:06.626218] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58750 ] 00:18:23.390 [2024-04-15 20:45:06.784816] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.649 [2024-04-15 20:45:06.993563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.417  Copying: 512/512 [B] (average 500 kBps) 00:18:25.417 00:18:25.417 20:45:08 -- dd/posix.sh@93 -- # [[ 0f7no69lhyv2xghuqzen51nc92ibnqjmjs0kv11lmddrqoszw919wvq8j3m65wi1qy5n8a0adedzlnkjth1ahar155s6lh09u5nrlumnp3imwkop0ta9wnwdhgwdv6pzbhq02m8kpn7zwy21k7ja667zv70tbxpasirfcx1oph8muh3c2rkcl1vy945oerujiewjvbleu6eqgprmihozynoic8hjk4o3qkbrpvo2sme1pqqik6ta1waqq0lakpz5rc78veduxz72wg0181fv5upu6vlc4evhw8thl7c2c4lqh3ycg6k5axn5r1i6448sf4j8y5fenqutlihdelq2eusthdhmntnnloyhmqrzh1m7wo5ilstr2j4p6yt6hgsfg01etriqkldo0zohy4xj695frpr8aos58v2y7wekce8a6u2y6ukkrpkhy6v0qluif7bnbf40dd0bnup8brf417ad17izglvghkh3pfjk9pt5xf9t5olchgcrti7hl0d9 == \0\f\7\n\o\6\9\l\h\y\v\2\x\g\h\u\q\z\e\n\5\1\n\c\9\2\i\b\n\q\j\m\j\s\0\k\v\1\1\l\m\d\d\r\q\o\s\z\w\9\1\9\w\v\q\8\j\3\m\6\5\w\i\1\q\y\5\n\8\a\0\a\d\e\d\z\l\n\k\j\t\h\1\a\h\a\r\1\5\5\s\6\l\h\0\9\u\5\n\r\l\u\m\n\p\3\i\m\w\k\o\p\0\t\a\9\w\n\w\d\h\g\w\d\v\6\p\z\b\h\q\0\2\m\8\k\p\n\7\z\w\y\2\1\k\7\j\a\6\6\7\z\v\7\0\t\b\x\p\a\s\i\r\f\c\x\1\o\p\h\8\m\u\h\3\c\2\r\k\c\l\1\v\y\9\4\5\o\e\r\u\j\i\e\w\j\v\b\l\e\u\6\e\q\g\p\r\m\i\h\o\z\y\n\o\i\c\8\h\j\k\4\o\3\q\k\b\r\p\v\o\2\s\m\e\1\p\q\q\i\k\6\t\a\1\w\a\q\q\0\l\a\k\p\z\5\r\c\7\8\v\e\d\u\x\z\7\2\w\g\0\1\8\1\f\v\5\u\p\u\6\v\l\c\4\e\v\h\w\8\t\h\l\7\c\2\c\4\l\q\h\3\y\c\g\6\k\5\a\x\n\5\r\1\i\6\4\4\8\s\f\4\j\8\y\5\f\e\n\q\u\t\l\i\h\d\e\l\q\2\e\u\s\t\h\d\h\m\n\t\n\n\l\o\y\h\m\q\r\z\h\1\m\7\w\o\5\i\l\s\t\r\2\j\4\p\6\y\t\6\h\g\s\f\g\0\1\e\t\r\i\q\k\l\d\o\0\z\o\h\y\4\x\j\6\9\5\f\r\p\r\8\a\o\s\5\8\v\2\y\7\w\e\k\c\e\8\a\6\u\2\y\6\u\k\k\r\p\k\h\y\6\v\0\q\l\u\i\f\7\b\n\b\f\4\0\d\d\0\b\n\u\p\8\b\r\f\4\1\7\a\d\1\7\i\z\g\l\v\g\h\k\h\3\p\f\j\k\9\p\t\5\x\f\9\t\5\o\l\c\h\g\c\r\t\i\7\h\l\0\d\9 ]] 00:18:25.417 20:45:08 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:25.417 20:45:08 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:18:25.417 [2024-04-15 20:45:08.835991] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:18:25.417 [2024-04-15 20:45:08.836151] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58779 ] 00:18:25.676 [2024-04-15 20:45:08.996810] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.934 [2024-04-15 20:45:09.202522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.588  Copying: 512/512 [B] (average 500 kBps) 00:18:27.588 00:18:27.588 20:45:10 -- dd/posix.sh@93 -- # [[ 0f7no69lhyv2xghuqzen51nc92ibnqjmjs0kv11lmddrqoszw919wvq8j3m65wi1qy5n8a0adedzlnkjth1ahar155s6lh09u5nrlumnp3imwkop0ta9wnwdhgwdv6pzbhq02m8kpn7zwy21k7ja667zv70tbxpasirfcx1oph8muh3c2rkcl1vy945oerujiewjvbleu6eqgprmihozynoic8hjk4o3qkbrpvo2sme1pqqik6ta1waqq0lakpz5rc78veduxz72wg0181fv5upu6vlc4evhw8thl7c2c4lqh3ycg6k5axn5r1i6448sf4j8y5fenqutlihdelq2eusthdhmntnnloyhmqrzh1m7wo5ilstr2j4p6yt6hgsfg01etriqkldo0zohy4xj695frpr8aos58v2y7wekce8a6u2y6ukkrpkhy6v0qluif7bnbf40dd0bnup8brf417ad17izglvghkh3pfjk9pt5xf9t5olchgcrti7hl0d9 == \0\f\7\n\o\6\9\l\h\y\v\2\x\g\h\u\q\z\e\n\5\1\n\c\9\2\i\b\n\q\j\m\j\s\0\k\v\1\1\l\m\d\d\r\q\o\s\z\w\9\1\9\w\v\q\8\j\3\m\6\5\w\i\1\q\y\5\n\8\a\0\a\d\e\d\z\l\n\k\j\t\h\1\a\h\a\r\1\5\5\s\6\l\h\0\9\u\5\n\r\l\u\m\n\p\3\i\m\w\k\o\p\0\t\a\9\w\n\w\d\h\g\w\d\v\6\p\z\b\h\q\0\2\m\8\k\p\n\7\z\w\y\2\1\k\7\j\a\6\6\7\z\v\7\0\t\b\x\p\a\s\i\r\f\c\x\1\o\p\h\8\m\u\h\3\c\2\r\k\c\l\1\v\y\9\4\5\o\e\r\u\j\i\e\w\j\v\b\l\e\u\6\e\q\g\p\r\m\i\h\o\z\y\n\o\i\c\8\h\j\k\4\o\3\q\k\b\r\p\v\o\2\s\m\e\1\p\q\q\i\k\6\t\a\1\w\a\q\q\0\l\a\k\p\z\5\r\c\7\8\v\e\d\u\x\z\7\2\w\g\0\1\8\1\f\v\5\u\p\u\6\v\l\c\4\e\v\h\w\8\t\h\l\7\c\2\c\4\l\q\h\3\y\c\g\6\k\5\a\x\n\5\r\1\i\6\4\4\8\s\f\4\j\8\y\5\f\e\n\q\u\t\l\i\h\d\e\l\q\2\e\u\s\t\h\d\h\m\n\t\n\n\l\o\y\h\m\q\r\z\h\1\m\7\w\o\5\i\l\s\t\r\2\j\4\p\6\y\t\6\h\g\s\f\g\0\1\e\t\r\i\q\k\l\d\o\0\z\o\h\y\4\x\j\6\9\5\f\r\p\r\8\a\o\s\5\8\v\2\y\7\w\e\k\c\e\8\a\6\u\2\y\6\u\k\k\r\p\k\h\y\6\v\0\q\l\u\i\f\7\b\n\b\f\4\0\d\d\0\b\n\u\p\8\b\r\f\4\1\7\a\d\1\7\i\z\g\l\v\g\h\k\h\3\p\f\j\k\9\p\t\5\x\f\9\t\5\o\l\c\h\g\c\r\t\i\7\h\l\0\d\9 ]] 00:18:27.588 20:45:10 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:27.588 20:45:10 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:18:27.588 [2024-04-15 20:45:11.046341] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:18:27.588 [2024-04-15 20:45:11.046491] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58808 ] 00:18:27.847 [2024-04-15 20:45:11.215403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.106 [2024-04-15 20:45:11.405556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.740  Copying: 512/512 [B] (average 250 kBps) 00:18:29.740 00:18:29.740 20:45:12 -- dd/posix.sh@93 -- # [[ 0f7no69lhyv2xghuqzen51nc92ibnqjmjs0kv11lmddrqoszw919wvq8j3m65wi1qy5n8a0adedzlnkjth1ahar155s6lh09u5nrlumnp3imwkop0ta9wnwdhgwdv6pzbhq02m8kpn7zwy21k7ja667zv70tbxpasirfcx1oph8muh3c2rkcl1vy945oerujiewjvbleu6eqgprmihozynoic8hjk4o3qkbrpvo2sme1pqqik6ta1waqq0lakpz5rc78veduxz72wg0181fv5upu6vlc4evhw8thl7c2c4lqh3ycg6k5axn5r1i6448sf4j8y5fenqutlihdelq2eusthdhmntnnloyhmqrzh1m7wo5ilstr2j4p6yt6hgsfg01etriqkldo0zohy4xj695frpr8aos58v2y7wekce8a6u2y6ukkrpkhy6v0qluif7bnbf40dd0bnup8brf417ad17izglvghkh3pfjk9pt5xf9t5olchgcrti7hl0d9 == \0\f\7\n\o\6\9\l\h\y\v\2\x\g\h\u\q\z\e\n\5\1\n\c\9\2\i\b\n\q\j\m\j\s\0\k\v\1\1\l\m\d\d\r\q\o\s\z\w\9\1\9\w\v\q\8\j\3\m\6\5\w\i\1\q\y\5\n\8\a\0\a\d\e\d\z\l\n\k\j\t\h\1\a\h\a\r\1\5\5\s\6\l\h\0\9\u\5\n\r\l\u\m\n\p\3\i\m\w\k\o\p\0\t\a\9\w\n\w\d\h\g\w\d\v\6\p\z\b\h\q\0\2\m\8\k\p\n\7\z\w\y\2\1\k\7\j\a\6\6\7\z\v\7\0\t\b\x\p\a\s\i\r\f\c\x\1\o\p\h\8\m\u\h\3\c\2\r\k\c\l\1\v\y\9\4\5\o\e\r\u\j\i\e\w\j\v\b\l\e\u\6\e\q\g\p\r\m\i\h\o\z\y\n\o\i\c\8\h\j\k\4\o\3\q\k\b\r\p\v\o\2\s\m\e\1\p\q\q\i\k\6\t\a\1\w\a\q\q\0\l\a\k\p\z\5\r\c\7\8\v\e\d\u\x\z\7\2\w\g\0\1\8\1\f\v\5\u\p\u\6\v\l\c\4\e\v\h\w\8\t\h\l\7\c\2\c\4\l\q\h\3\y\c\g\6\k\5\a\x\n\5\r\1\i\6\4\4\8\s\f\4\j\8\y\5\f\e\n\q\u\t\l\i\h\d\e\l\q\2\e\u\s\t\h\d\h\m\n\t\n\n\l\o\y\h\m\q\r\z\h\1\m\7\w\o\5\i\l\s\t\r\2\j\4\p\6\y\t\6\h\g\s\f\g\0\1\e\t\r\i\q\k\l\d\o\0\z\o\h\y\4\x\j\6\9\5\f\r\p\r\8\a\o\s\5\8\v\2\y\7\w\e\k\c\e\8\a\6\u\2\y\6\u\k\k\r\p\k\h\y\6\v\0\q\l\u\i\f\7\b\n\b\f\4\0\d\d\0\b\n\u\p\8\b\r\f\4\1\7\a\d\1\7\i\z\g\l\v\g\h\k\h\3\p\f\j\k\9\p\t\5\x\f\9\t\5\o\l\c\h\g\c\r\t\i\7\h\l\0\d\9 ]] 00:18:29.740 20:45:12 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:29.740 20:45:12 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:18:29.740 [2024-04-15 20:45:13.111361] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
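Everything from the flags_ro/flags_rw definitions down to this point is a single nested loop: each iflag in (direct nonblock) is paired with each oflag in (direct nonblock sync dsync), giving the eight spdk_dd runs with pids 58631 through 58841, and every combination must produce a byte-identical copy. The shape of the loop, with cmp standing in for the string compare the test actually uses:

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)   # direct nonblock sync dsync
    for flag_ro in "${flags_ro[@]}"; do
      for flag_rw in "${flags_rw[@]}"; do
        "$DD" --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
        cmp dd.dump0 dd.dump1                # every pairing must round-trip
      done
    done

Keeping the payload at exactly 512 bytes also keeps the direct (O_DIRECT) runs happy, since direct I/O wants sector-aligned transfer sizes.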
00:18:29.740 [2024-04-15 20:45:13.111511] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58841 ] 00:18:29.999 [2024-04-15 20:45:13.268049] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.999 [2024-04-15 20:45:13.466768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.992  Copying: 512/512 [B] (average 166 kBps) 00:18:31.992 00:18:31.992 20:45:15 -- dd/posix.sh@93 -- # [[ 0f7no69lhyv2xghuqzen51nc92ibnqjmjs0kv11lmddrqoszw919wvq8j3m65wi1qy5n8a0adedzlnkjth1ahar155s6lh09u5nrlumnp3imwkop0ta9wnwdhgwdv6pzbhq02m8kpn7zwy21k7ja667zv70tbxpasirfcx1oph8muh3c2rkcl1vy945oerujiewjvbleu6eqgprmihozynoic8hjk4o3qkbrpvo2sme1pqqik6ta1waqq0lakpz5rc78veduxz72wg0181fv5upu6vlc4evhw8thl7c2c4lqh3ycg6k5axn5r1i6448sf4j8y5fenqutlihdelq2eusthdhmntnnloyhmqrzh1m7wo5ilstr2j4p6yt6hgsfg01etriqkldo0zohy4xj695frpr8aos58v2y7wekce8a6u2y6ukkrpkhy6v0qluif7bnbf40dd0bnup8brf417ad17izglvghkh3pfjk9pt5xf9t5olchgcrti7hl0d9 == \0\f\7\n\o\6\9\l\h\y\v\2\x\g\h\u\q\z\e\n\5\1\n\c\9\2\i\b\n\q\j\m\j\s\0\k\v\1\1\l\m\d\d\r\q\o\s\z\w\9\1\9\w\v\q\8\j\3\m\6\5\w\i\1\q\y\5\n\8\a\0\a\d\e\d\z\l\n\k\j\t\h\1\a\h\a\r\1\5\5\s\6\l\h\0\9\u\5\n\r\l\u\m\n\p\3\i\m\w\k\o\p\0\t\a\9\w\n\w\d\h\g\w\d\v\6\p\z\b\h\q\0\2\m\8\k\p\n\7\z\w\y\2\1\k\7\j\a\6\6\7\z\v\7\0\t\b\x\p\a\s\i\r\f\c\x\1\o\p\h\8\m\u\h\3\c\2\r\k\c\l\1\v\y\9\4\5\o\e\r\u\j\i\e\w\j\v\b\l\e\u\6\e\q\g\p\r\m\i\h\o\z\y\n\o\i\c\8\h\j\k\4\o\3\q\k\b\r\p\v\o\2\s\m\e\1\p\q\q\i\k\6\t\a\1\w\a\q\q\0\l\a\k\p\z\5\r\c\7\8\v\e\d\u\x\z\7\2\w\g\0\1\8\1\f\v\5\u\p\u\6\v\l\c\4\e\v\h\w\8\t\h\l\7\c\2\c\4\l\q\h\3\y\c\g\6\k\5\a\x\n\5\r\1\i\6\4\4\8\s\f\4\j\8\y\5\f\e\n\q\u\t\l\i\h\d\e\l\q\2\e\u\s\t\h\d\h\m\n\t\n\n\l\o\y\h\m\q\r\z\h\1\m\7\w\o\5\i\l\s\t\r\2\j\4\p\6\y\t\6\h\g\s\f\g\0\1\e\t\r\i\q\k\l\d\o\0\z\o\h\y\4\x\j\6\9\5\f\r\p\r\8\a\o\s\5\8\v\2\y\7\w\e\k\c\e\8\a\6\u\2\y\6\u\k\k\r\p\k\h\y\6\v\0\q\l\u\i\f\7\b\n\b\f\4\0\d\d\0\b\n\u\p\8\b\r\f\4\1\7\a\d\1\7\i\z\g\l\v\g\h\k\h\3\p\f\j\k\9\p\t\5\x\f\9\t\5\o\l\c\h\g\c\r\t\i\7\h\l\0\d\9 ]] 00:18:31.992 00:18:31.992 real 0m17.023s 00:18:31.993 user 0m13.701s 00:18:31.993 sys 0m1.688s 00:18:31.993 ************************************ 00:18:31.993 END TEST dd_flags_misc 00:18:31.993 ************************************ 00:18:31.993 20:45:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:31.993 20:45:15 -- common/autotest_common.sh@10 -- # set +x 00:18:31.993 20:45:15 -- dd/posix.sh@131 -- # tests_forced_aio 00:18:31.993 20:45:15 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO' 00:18:31.993 * Second test run, using AIO 00:18:31.993 20:45:15 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:18:31.993 20:45:15 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:18:31.993 20:45:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:31.993 20:45:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:31.993 20:45:15 -- common/autotest_common.sh@10 -- # set +x 00:18:31.993 ************************************ 00:18:31.993 START TEST dd_flag_append_forced_aio 00:18:31.993 ************************************ 00:18:31.993 20:45:15 -- common/autotest_common.sh@1104 -- # append 00:18:31.993 20:45:15 -- dd/posix.sh@16 -- # local dump0 00:18:31.993 20:45:15 -- dd/posix.sh@17 -- # local dump1 00:18:31.993 20:45:15 -- dd/posix.sh@19 -- # gen_bytes 32 00:18:31.993 20:45:15 -- dd/common.sh@98 -- # xtrace_disable 
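From the "Second test run, using AIO" banner onward, DD_APP+=("--aio") makes every subsequent spdk_dd invocation take the POSIX AIO path, which is why --aio appears in all the commands below. The append test being set up here generates two 32-byte strings (gen_bytes silences xtrace while it runs, hence the xtrace_disable / set +x pair around it), writes each to its own dump file, copies one onto the other with --oflag=append, and checks that the result is the old content followed by the new. In outline, with an assumed gen_bytes body that matches the lowercase-alphanumeric dumps seen in this log:

    gen_bytes() {                            # assumption: emit N random alnum bytes
      tr -dc 'a-z0-9' </dev/urandom | head -c "$1"
    }
    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    dump0=$(gen_bytes 32)
    dump1=$(gen_bytes 32)
    printf %s "$dump0" > dd.dump0
    printf %s "$dump1" > dd.dump1
    "$DD" --aio --if=dd.dump0 --of=dd.dump1 --oflag=append
    [[ $(<dd.dump1) == "$dump1$dump0" ]]     # O_APPEND: existing bytes first, then the copy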
00:18:31.993 20:45:15 -- common/autotest_common.sh@10 -- # set +x 00:18:31.993 20:45:15 -- dd/posix.sh@19 -- # dump0=wsupz33v55k5wgxoiz7nqfgzkithf4wc 00:18:31.993 20:45:15 -- dd/posix.sh@20 -- # gen_bytes 32 00:18:31.993 20:45:15 -- dd/common.sh@98 -- # xtrace_disable 00:18:31.993 20:45:15 -- common/autotest_common.sh@10 -- # set +x 00:18:31.993 20:45:15 -- dd/posix.sh@20 -- # dump1=bst46fvscxqe01uusbu56dbfwmyv0iu0 00:18:31.993 20:45:15 -- dd/posix.sh@22 -- # printf %s wsupz33v55k5wgxoiz7nqfgzkithf4wc 00:18:31.993 20:45:15 -- dd/posix.sh@23 -- # printf %s bst46fvscxqe01uusbu56dbfwmyv0iu0 00:18:31.993 20:45:15 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:18:31.993 [2024-04-15 20:45:15.317826] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:18:31.993 [2024-04-15 20:45:15.317977] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58898 ] 00:18:31.993 [2024-04-15 20:45:15.473937] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.251 [2024-04-15 20:45:15.666099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.197  Copying: 32/32 [B] (average 31 kBps) 00:18:34.197 00:18:34.197 ************************************ 00:18:34.197 END TEST dd_flag_append_forced_aio 00:18:34.197 ************************************ 00:18:34.197 20:45:17 -- dd/posix.sh@27 -- # [[ bst46fvscxqe01uusbu56dbfwmyv0iu0wsupz33v55k5wgxoiz7nqfgzkithf4wc == \b\s\t\4\6\f\v\s\c\x\q\e\0\1\u\u\s\b\u\5\6\d\b\f\w\m\y\v\0\i\u\0\w\s\u\p\z\3\3\v\5\5\k\5\w\g\x\o\i\z\7\n\q\f\g\z\k\i\t\h\f\4\w\c ]] 00:18:34.197 00:18:34.197 real 0m2.112s 00:18:34.197 user 0m1.697s 00:18:34.197 sys 0m0.215s 00:18:34.197 20:45:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:34.197 20:45:17 -- common/autotest_common.sh@10 -- # set +x 00:18:34.197 20:45:17 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:18:34.197 20:45:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:34.197 20:45:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:34.197 20:45:17 -- common/autotest_common.sh@10 -- # set +x 00:18:34.197 ************************************ 00:18:34.197 START TEST dd_flag_directory_forced_aio 00:18:34.197 ************************************ 00:18:34.197 20:45:17 -- common/autotest_common.sh@1104 -- # directory 00:18:34.197 20:45:17 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:34.197 20:45:17 -- common/autotest_common.sh@640 -- # local es=0 00:18:34.197 20:45:17 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:34.197 20:45:17 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:34.197 20:45:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:34.197 20:45:17 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:34.197 20:45:17 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:34.197 20:45:17 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:34.197 20:45:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:34.197 20:45:17 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:34.197 20:45:17 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:18:34.197 20:45:17 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:34.197 [2024-04-15 20:45:17.486184] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:18:34.197 [2024-04-15 20:45:17.486331] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58952 ] 00:18:34.197 [2024-04-15 20:45:17.640126] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.456 [2024-04-15 20:45:17.832379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.714 [2024-04-15 20:45:18.179090] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:18:34.714 [2024-04-15 20:45:18.179160] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:18:34.714 [2024-04-15 20:45:18.179186] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:35.648 [2024-04-15 20:45:19.062360] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:18:36.213 20:45:19 -- common/autotest_common.sh@643 -- # es=236 00:18:36.213 20:45:19 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:36.213 20:45:19 -- common/autotest_common.sh@652 -- # es=108 00:18:36.213 20:45:19 -- common/autotest_common.sh@653 -- # case "$es" in 00:18:36.213 20:45:19 -- common/autotest_common.sh@660 -- # es=1 00:18:36.213 20:45:19 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:36.213 20:45:19 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:18:36.213 20:45:19 -- common/autotest_common.sh@640 -- # local es=0 00:18:36.213 20:45:19 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:18:36.213 20:45:19 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:36.213 20:45:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:36.213 20:45:19 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:36.213 20:45:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:36.213 20:45:19 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:36.213 20:45:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:36.213 20:45:19 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
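The "Not a directory" errors just above are the point of this test: --iflag=directory maps to O_DIRECTORY, and opening a regular file that way fails with ENOTDIR; the second NOT-wrapped run being resolved in the trace here repeats the check on the output side with --oflag=directory. The exit-status handling is the same as before, only with es=236 folding to 108 and then to 1. GNU dd honours the same flag, so the check reduces to:

    : > regular_file                                  # an ordinary, empty file
    dd if=regular_file iflag=directory of=/dev/null
    # dd: failed to open 'regular_file': Not a directory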
00:18:36.213 20:45:19 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:18:36.213 20:45:19 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:18:36.213 [2024-04-15 20:45:19.593435] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:18:36.213 [2024-04-15 20:45:19.593613] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58979 ] 00:18:36.471 [2024-04-15 20:45:19.742321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.471 [2024-04-15 20:45:19.939244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.038 [2024-04-15 20:45:20.276149] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:18:37.038 [2024-04-15 20:45:20.276218] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:18:37.038 [2024-04-15 20:45:20.276245] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:37.976 [2024-04-15 20:45:21.135795] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:18:38.236 ************************************ 00:18:38.236 END TEST dd_flag_directory_forced_aio 00:18:38.236 ************************************ 00:18:38.236 20:45:21 -- common/autotest_common.sh@643 -- # es=236 00:18:38.236 20:45:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:38.236 20:45:21 -- common/autotest_common.sh@652 -- # es=108 00:18:38.236 20:45:21 -- common/autotest_common.sh@653 -- # case "$es" in 00:18:38.236 20:45:21 -- common/autotest_common.sh@660 -- # es=1 00:18:38.236 20:45:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:38.236 00:18:38.236 real 0m4.171s 00:18:38.236 user 0m3.372s 00:18:38.236 sys 0m0.401s 00:18:38.236 20:45:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:38.236 20:45:21 -- common/autotest_common.sh@10 -- # set +x 00:18:38.236 20:45:21 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:18:38.236 20:45:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:38.236 20:45:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:38.236 20:45:21 -- common/autotest_common.sh@10 -- # set +x 00:18:38.236 ************************************ 00:18:38.236 START TEST dd_flag_nofollow_forced_aio 00:18:38.236 ************************************ 00:18:38.236 20:45:21 -- common/autotest_common.sh@1104 -- # nofollow 00:18:38.236 20:45:21 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:18:38.236 20:45:21 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:18:38.236 20:45:21 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:18:38.236 20:45:21 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:18:38.236 20:45:21 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:38.236 20:45:21 -- common/autotest_common.sh@640 -- # local es=0 00:18:38.236 20:45:21 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:38.236 20:45:21 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:38.236 20:45:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:38.236 20:45:21 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:38.236 20:45:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:38.236 20:45:21 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:38.236 20:45:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:38.236 20:45:21 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:38.236 20:45:21 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:18:38.236 20:45:21 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:38.236 [2024-04-15 20:45:21.730418] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:18:38.236 [2024-04-15 20:45:21.730580] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59031 ] 00:18:38.497 [2024-04-15 20:45:21.895542] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.756 [2024-04-15 20:45:22.092429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.013 [2024-04-15 20:45:22.447928] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:18:39.013 [2024-04-15 20:45:22.448001] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:18:39.013 [2024-04-15 20:45:22.448029] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:39.950 [2024-04-15 20:45:23.292932] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:18:40.210 20:45:23 -- common/autotest_common.sh@643 -- # es=216 00:18:40.210 20:45:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:40.210 20:45:23 -- common/autotest_common.sh@652 -- # es=88 00:18:40.210 20:45:23 -- common/autotest_common.sh@653 -- # case "$es" in 00:18:40.210 20:45:23 -- common/autotest_common.sh@660 -- # es=1 00:18:40.210 20:45:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:40.210 20:45:23 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:18:40.210 20:45:23 -- common/autotest_common.sh@640 -- # local es=0 00:18:40.210 20:45:23 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:18:40.210 20:45:23 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:40.210 20:45:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:40.210 20:45:23 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:40.210 20:45:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:40.210 20:45:23 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:40.210 20:45:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:40.210 20:45:23 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:40.210 20:45:23 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:18:40.210 20:45:23 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:18:40.469 [2024-04-15 20:45:23.816243] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:18:40.469 [2024-04-15 20:45:23.816410] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59066 ] 00:18:40.728 [2024-04-15 20:45:23.970127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.728 [2024-04-15 20:45:24.161787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.297 [2024-04-15 20:45:24.507734] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:18:41.297 [2024-04-15 20:45:24.507804] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:18:41.297 [2024-04-15 20:45:24.507846] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:42.235 [2024-04-15 20:45:25.372893] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:18:42.494 20:45:25 -- common/autotest_common.sh@643 -- # es=216 00:18:42.494 20:45:25 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:42.494 20:45:25 -- common/autotest_common.sh@652 -- # es=88 00:18:42.494 20:45:25 -- common/autotest_common.sh@653 -- # case "$es" in 00:18:42.494 20:45:25 -- common/autotest_common.sh@660 -- # es=1 00:18:42.494 20:45:25 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:42.494 20:45:25 -- dd/posix.sh@46 -- # gen_bytes 512 00:18:42.494 20:45:25 -- dd/common.sh@98 -- # xtrace_disable 00:18:42.494 20:45:25 -- common/autotest_common.sh@10 -- # set +x 00:18:42.494 20:45:25 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:42.494 [2024-04-15 20:45:25.903382] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:18:42.494 [2024-04-15 20:45:25.903536] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59092 ] 00:18:42.754 [2024-04-15 20:45:26.081829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.014 [2024-04-15 20:45:26.275450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.651  Copying: 512/512 [B] (average 500 kBps) 00:18:44.651 00:18:44.651 20:45:27 -- dd/posix.sh@49 -- # [[ f4e025yaxkqt105go2e3ozo44s3bdm2240w0mmgxa1fm67wm5fssdgqfp97p0tbq3nmys3r43032zr3shye19qp04fev2pnzb92h3zquj1lffkee7mfmmhebvwifdrfw00dwcnav4rj6ghuurgusdo6dk55u5xq85fapk7jdpenucvc0elic2ui6zgev4yaqeynllnm3p8bwakpjfbl1ho8somuk58sv30mxu790ykgn2hnhmui7h879u1d0xsa47wrj4v71zlpavt6f3p2yfpmg9vgq8uwfyst44hina8dow4qsf10hm8zhdam5wygwwlj50r87d76bzv7wa3hbzi9jzlkcb4x04yyvjt904w9952q8dy0p6v87f0ysfl7brujakh1pksatc5uqzael5ytjq4m8f9c60wfwdf10mxntug030q4u19s5xgsgxb3hrh3ijrzacby7tq8o8rxzcdrs1sw2w8omhu4kpaghuxgn2wzc7cu1s32gy1s35p0j == \f\4\e\0\2\5\y\a\x\k\q\t\1\0\5\g\o\2\e\3\o\z\o\4\4\s\3\b\d\m\2\2\4\0\w\0\m\m\g\x\a\1\f\m\6\7\w\m\5\f\s\s\d\g\q\f\p\9\7\p\0\t\b\q\3\n\m\y\s\3\r\4\3\0\3\2\z\r\3\s\h\y\e\1\9\q\p\0\4\f\e\v\2\p\n\z\b\9\2\h\3\z\q\u\j\1\l\f\f\k\e\e\7\m\f\m\m\h\e\b\v\w\i\f\d\r\f\w\0\0\d\w\c\n\a\v\4\r\j\6\g\h\u\u\r\g\u\s\d\o\6\d\k\5\5\u\5\x\q\8\5\f\a\p\k\7\j\d\p\e\n\u\c\v\c\0\e\l\i\c\2\u\i\6\z\g\e\v\4\y\a\q\e\y\n\l\l\n\m\3\p\8\b\w\a\k\p\j\f\b\l\1\h\o\8\s\o\m\u\k\5\8\s\v\3\0\m\x\u\7\9\0\y\k\g\n\2\h\n\h\m\u\i\7\h\8\7\9\u\1\d\0\x\s\a\4\7\w\r\j\4\v\7\1\z\l\p\a\v\t\6\f\3\p\2\y\f\p\m\g\9\v\g\q\8\u\w\f\y\s\t\4\4\h\i\n\a\8\d\o\w\4\q\s\f\1\0\h\m\8\z\h\d\a\m\5\w\y\g\w\w\l\j\5\0\r\8\7\d\7\6\b\z\v\7\w\a\3\h\b\z\i\9\j\z\l\k\c\b\4\x\0\4\y\y\v\j\t\9\0\4\w\9\9\5\2\q\8\d\y\0\p\6\v\8\7\f\0\y\s\f\l\7\b\r\u\j\a\k\h\1\p\k\s\a\t\c\5\u\q\z\a\e\l\5\y\t\j\q\4\m\8\f\9\c\6\0\w\f\w\d\f\1\0\m\x\n\t\u\g\0\3\0\q\4\u\1\9\s\5\x\g\s\g\x\b\3\h\r\h\3\i\j\r\z\a\c\b\y\7\t\q\8\o\8\r\x\z\c\d\r\s\1\s\w\2\w\8\o\m\h\u\4\k\p\a\g\h\u\x\g\n\2\w\z\c\7\c\u\1\s\3\2\g\y\1\s\3\5\p\0\j ]] 00:18:44.651 00:18:44.651 real 0m6.325s 00:18:44.651 user 0m5.092s 00:18:44.651 sys 0m0.637s 00:18:44.651 20:45:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:44.651 20:45:27 -- common/autotest_common.sh@10 -- # set +x 00:18:44.651 ************************************ 00:18:44.651 END TEST dd_flag_nofollow_forced_aio 00:18:44.651 ************************************ 00:18:44.651 20:45:27 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:18:44.651 20:45:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:44.651 20:45:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:44.651 20:45:27 -- common/autotest_common.sh@10 -- # set +x 00:18:44.651 ************************************ 00:18:44.651 START TEST dd_flag_noatime_forced_aio 00:18:44.651 ************************************ 00:18:44.651 20:45:27 -- common/autotest_common.sh@1104 -- # noatime 00:18:44.651 20:45:27 -- dd/posix.sh@53 -- # local atime_if 00:18:44.651 20:45:27 -- dd/posix.sh@54 -- # local atime_of 00:18:44.651 20:45:27 -- dd/posix.sh@58 -- # gen_bytes 512 00:18:44.651 20:45:27 -- dd/common.sh@98 -- # xtrace_disable 00:18:44.651 20:45:27 -- common/autotest_common.sh@10 -- # set +x 00:18:44.651 20:45:27 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:44.651 20:45:27 -- dd/posix.sh@60 -- # atime_if=1713213926 
00:18:44.651 20:45:27 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:44.651 20:45:27 -- dd/posix.sh@61 -- # atime_of=1713213927 00:18:44.651 20:45:27 -- dd/posix.sh@66 -- # sleep 1 00:18:45.589 20:45:28 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:45.873 [2024-04-15 20:45:29.139299] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:18:45.874 [2024-04-15 20:45:29.139458] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59157 ] 00:18:45.874 [2024-04-15 20:45:29.294160] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.133 [2024-04-15 20:45:29.488140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.762  Copying: 512/512 [B] (average 500 kBps) 00:18:47.762 00:18:47.762 20:45:31 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:47.762 20:45:31 -- dd/posix.sh@69 -- # (( atime_if == 1713213926 )) 00:18:47.762 20:45:31 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:47.762 20:45:31 -- dd/posix.sh@70 -- # (( atime_of == 1713213927 )) 00:18:47.762 20:45:31 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:18:47.762 [2024-04-15 20:45:31.219040] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
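The noatime test brackets each copy with stat --printf=%X, the file's access time in epoch seconds. The --iflag=noatime (O_NOATIME) run above left both timestamps untouched, hence (( atime_if == 1713213926 )) and (( atime_of == 1713213927 )); the control copy now starting runs without the flag, and the closing stat re-checks the source's atime against the clock. Stripped of the harness (note that O_NOATIME is only honoured for the file's owner, and a relatime mount can mask the plain-read case):

    atime_before=$(stat --printf=%X dd.dump0)
    sleep 1
    dd if=dd.dump0 iflag=noatime of=/dev/null    # read must not touch atime
    (( $(stat --printf=%X dd.dump0) == atime_before ))
    sleep 1
    dd if=dd.dump0 of=/dev/null                  # ordinary read may update it
    (( $(stat --printf=%X dd.dump0) > atime_before ))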
00:18:47.763 [2024-04-15 20:45:31.219194] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59188 ] 00:18:48.021 [2024-04-15 20:45:31.393006] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.279 [2024-04-15 20:45:31.590136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.913  Copying: 512/512 [B] (average 500 kBps) 00:18:49.913 00:18:49.913 20:45:33 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:18:49.913 ************************************ 00:18:49.913 END TEST dd_flag_noatime_forced_aio 00:18:49.913 ************************************ 00:18:49.913 20:45:33 -- dd/posix.sh@73 -- # (( atime_if < 1713213931 )) 00:18:49.913 00:18:49.913 real 0m5.222s 00:18:49.913 user 0m3.375s 00:18:49.913 sys 0m0.448s 00:18:49.913 20:45:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:49.913 20:45:33 -- common/autotest_common.sh@10 -- # set +x 00:18:49.913 20:45:33 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:18:49.913 20:45:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:49.913 20:45:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:49.913 20:45:33 -- common/autotest_common.sh@10 -- # set +x 00:18:49.913 ************************************ 00:18:49.913 START TEST dd_flags_misc_forced_aio 00:18:49.913 ************************************ 00:18:49.913 20:45:33 -- common/autotest_common.sh@1104 -- # io 00:18:49.913 20:45:33 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:18:49.913 20:45:33 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:18:49.913 20:45:33 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:18:49.913 20:45:33 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:18:49.913 20:45:33 -- dd/posix.sh@86 -- # gen_bytes 512 00:18:49.913 20:45:33 -- dd/common.sh@98 -- # xtrace_disable 00:18:49.913 20:45:33 -- common/autotest_common.sh@10 -- # set +x 00:18:49.913 20:45:33 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:49.913 20:45:33 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:18:50.172 [2024-04-15 20:45:33.415998] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:18:50.172 [2024-04-15 20:45:33.416146] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59239 ] 00:18:50.172 [2024-04-15 20:45:33.569711] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.430 [2024-04-15 20:45:33.768367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.070  Copying: 512/512 [B] (average 500 kBps) 00:18:52.070 00:18:52.071 20:45:35 -- dd/posix.sh@93 -- # [[ o8kqsqdggaj3py8bw1bpxxy3o5flc1x07ccomogaa7zgdzbyo2b6hx9t9e8q2er0rwwt67q6zcbtlb14nvak1ings57e7jo166mvfo0av8nme2w53p38c0ocgt92nbngoxd53dftd5rj2ucy6pfxuk1erngl9uhbllzq7qbh1lgo6t8yf6zrcgh9mtb4um4svq6jko8cep7lewec5sorw4gejg88mlc4y6gtp5nrdpxb5ukkyx5bj3o57aix3efcdtxa3f8v0cmqwma3kgsln00tlp013zw1es2307gkx32s5epng0k5bzfphdepllz2xf1xhpa4etsaq9kfig5e7fltoxnpjswhbgjcfhfam75k0zsrxnf7guakhn88b09pozxlnhmk8mml85azeav7qnuleq3330r1fx1uslschiyn04saw0ahczlxmbm00se8t2opce0p7ph2ropx4o8byldl4yko64tp61eeitsl8dt1r8t8n2az2dxpmobexiez == \o\8\k\q\s\q\d\g\g\a\j\3\p\y\8\b\w\1\b\p\x\x\y\3\o\5\f\l\c\1\x\0\7\c\c\o\m\o\g\a\a\7\z\g\d\z\b\y\o\2\b\6\h\x\9\t\9\e\8\q\2\e\r\0\r\w\w\t\6\7\q\6\z\c\b\t\l\b\1\4\n\v\a\k\1\i\n\g\s\5\7\e\7\j\o\1\6\6\m\v\f\o\0\a\v\8\n\m\e\2\w\5\3\p\3\8\c\0\o\c\g\t\9\2\n\b\n\g\o\x\d\5\3\d\f\t\d\5\r\j\2\u\c\y\6\p\f\x\u\k\1\e\r\n\g\l\9\u\h\b\l\l\z\q\7\q\b\h\1\l\g\o\6\t\8\y\f\6\z\r\c\g\h\9\m\t\b\4\u\m\4\s\v\q\6\j\k\o\8\c\e\p\7\l\e\w\e\c\5\s\o\r\w\4\g\e\j\g\8\8\m\l\c\4\y\6\g\t\p\5\n\r\d\p\x\b\5\u\k\k\y\x\5\b\j\3\o\5\7\a\i\x\3\e\f\c\d\t\x\a\3\f\8\v\0\c\m\q\w\m\a\3\k\g\s\l\n\0\0\t\l\p\0\1\3\z\w\1\e\s\2\3\0\7\g\k\x\3\2\s\5\e\p\n\g\0\k\5\b\z\f\p\h\d\e\p\l\l\z\2\x\f\1\x\h\p\a\4\e\t\s\a\q\9\k\f\i\g\5\e\7\f\l\t\o\x\n\p\j\s\w\h\b\g\j\c\f\h\f\a\m\7\5\k\0\z\s\r\x\n\f\7\g\u\a\k\h\n\8\8\b\0\9\p\o\z\x\l\n\h\m\k\8\m\m\l\8\5\a\z\e\a\v\7\q\n\u\l\e\q\3\3\3\0\r\1\f\x\1\u\s\l\s\c\h\i\y\n\0\4\s\a\w\0\a\h\c\z\l\x\m\b\m\0\0\s\e\8\t\2\o\p\c\e\0\p\7\p\h\2\r\o\p\x\4\o\8\b\y\l\d\l\4\y\k\o\6\4\t\p\6\1\e\e\i\t\s\l\8\d\t\1\r\8\t\8\n\2\a\z\2\d\x\p\m\o\b\e\x\i\e\z ]] 00:18:52.071 20:45:35 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:52.071 20:45:35 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:18:52.347 [2024-04-15 20:45:35.576829] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:18:52.347 [2024-04-15 20:45:35.576990] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59272 ] 00:18:52.347 [2024-04-15 20:45:35.734963] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.605 [2024-04-15 20:45:35.941764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.239  Copying: 512/512 [B] (average 500 kBps) 00:18:54.239 00:18:54.239 20:45:37 -- dd/posix.sh@93 -- # [[ o8kqsqdggaj3py8bw1bpxxy3o5flc1x07ccomogaa7zgdzbyo2b6hx9t9e8q2er0rwwt67q6zcbtlb14nvak1ings57e7jo166mvfo0av8nme2w53p38c0ocgt92nbngoxd53dftd5rj2ucy6pfxuk1erngl9uhbllzq7qbh1lgo6t8yf6zrcgh9mtb4um4svq6jko8cep7lewec5sorw4gejg88mlc4y6gtp5nrdpxb5ukkyx5bj3o57aix3efcdtxa3f8v0cmqwma3kgsln00tlp013zw1es2307gkx32s5epng0k5bzfphdepllz2xf1xhpa4etsaq9kfig5e7fltoxnpjswhbgjcfhfam75k0zsrxnf7guakhn88b09pozxlnhmk8mml85azeav7qnuleq3330r1fx1uslschiyn04saw0ahczlxmbm00se8t2opce0p7ph2ropx4o8byldl4yko64tp61eeitsl8dt1r8t8n2az2dxpmobexiez == \o\8\k\q\s\q\d\g\g\a\j\3\p\y\8\b\w\1\b\p\x\x\y\3\o\5\f\l\c\1\x\0\7\c\c\o\m\o\g\a\a\7\z\g\d\z\b\y\o\2\b\6\h\x\9\t\9\e\8\q\2\e\r\0\r\w\w\t\6\7\q\6\z\c\b\t\l\b\1\4\n\v\a\k\1\i\n\g\s\5\7\e\7\j\o\1\6\6\m\v\f\o\0\a\v\8\n\m\e\2\w\5\3\p\3\8\c\0\o\c\g\t\9\2\n\b\n\g\o\x\d\5\3\d\f\t\d\5\r\j\2\u\c\y\6\p\f\x\u\k\1\e\r\n\g\l\9\u\h\b\l\l\z\q\7\q\b\h\1\l\g\o\6\t\8\y\f\6\z\r\c\g\h\9\m\t\b\4\u\m\4\s\v\q\6\j\k\o\8\c\e\p\7\l\e\w\e\c\5\s\o\r\w\4\g\e\j\g\8\8\m\l\c\4\y\6\g\t\p\5\n\r\d\p\x\b\5\u\k\k\y\x\5\b\j\3\o\5\7\a\i\x\3\e\f\c\d\t\x\a\3\f\8\v\0\c\m\q\w\m\a\3\k\g\s\l\n\0\0\t\l\p\0\1\3\z\w\1\e\s\2\3\0\7\g\k\x\3\2\s\5\e\p\n\g\0\k\5\b\z\f\p\h\d\e\p\l\l\z\2\x\f\1\x\h\p\a\4\e\t\s\a\q\9\k\f\i\g\5\e\7\f\l\t\o\x\n\p\j\s\w\h\b\g\j\c\f\h\f\a\m\7\5\k\0\z\s\r\x\n\f\7\g\u\a\k\h\n\8\8\b\0\9\p\o\z\x\l\n\h\m\k\8\m\m\l\8\5\a\z\e\a\v\7\q\n\u\l\e\q\3\3\3\0\r\1\f\x\1\u\s\l\s\c\h\i\y\n\0\4\s\a\w\0\a\h\c\z\l\x\m\b\m\0\0\s\e\8\t\2\o\p\c\e\0\p\7\p\h\2\r\o\p\x\4\o\8\b\y\l\d\l\4\y\k\o\6\4\t\p\6\1\e\e\i\t\s\l\8\d\t\1\r\8\t\8\n\2\a\z\2\d\x\p\m\o\b\e\x\i\e\z ]] 00:18:54.239 20:45:37 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:54.239 20:45:37 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:18:54.498 [2024-04-15 20:45:37.777757] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
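The long backslash-escaped string on the right-hand side of each [[ ... ]] above is just how bash xtrace prints the quoted operand of a pattern match; the test is a plain equality check between the random payload fed in through dd.dump0 and what lands in dd.dump1 after each flag combination. A hedged equivalent of the assertion, with file names as in the log:

  # quoting the right-hand side forces a literal match instead of glob matching
  src=$(< /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0)
  dst=$(< /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1)
  [[ $src == "$dst" ]] || echo "payload mismatch for oflag=$flag_rw" >&2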
00:18:54.498 [2024-04-15 20:45:37.777924] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59307 ] 00:18:54.498 [2024-04-15 20:45:37.939621] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.757 [2024-04-15 20:45:38.145081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.396  Copying: 512/512 [B] (average 166 kBps) 00:18:56.396 00:18:56.396 20:45:39 -- dd/posix.sh@93 -- # [[ o8kqsqdggaj3py8bw1bpxxy3o5flc1x07ccomogaa7zgdzbyo2b6hx9t9e8q2er0rwwt67q6zcbtlb14nvak1ings57e7jo166mvfo0av8nme2w53p38c0ocgt92nbngoxd53dftd5rj2ucy6pfxuk1erngl9uhbllzq7qbh1lgo6t8yf6zrcgh9mtb4um4svq6jko8cep7lewec5sorw4gejg88mlc4y6gtp5nrdpxb5ukkyx5bj3o57aix3efcdtxa3f8v0cmqwma3kgsln00tlp013zw1es2307gkx32s5epng0k5bzfphdepllz2xf1xhpa4etsaq9kfig5e7fltoxnpjswhbgjcfhfam75k0zsrxnf7guakhn88b09pozxlnhmk8mml85azeav7qnuleq3330r1fx1uslschiyn04saw0ahczlxmbm00se8t2opce0p7ph2ropx4o8byldl4yko64tp61eeitsl8dt1r8t8n2az2dxpmobexiez == \o\8\k\q\s\q\d\g\g\a\j\3\p\y\8\b\w\1\b\p\x\x\y\3\o\5\f\l\c\1\x\0\7\c\c\o\m\o\g\a\a\7\z\g\d\z\b\y\o\2\b\6\h\x\9\t\9\e\8\q\2\e\r\0\r\w\w\t\6\7\q\6\z\c\b\t\l\b\1\4\n\v\a\k\1\i\n\g\s\5\7\e\7\j\o\1\6\6\m\v\f\o\0\a\v\8\n\m\e\2\w\5\3\p\3\8\c\0\o\c\g\t\9\2\n\b\n\g\o\x\d\5\3\d\f\t\d\5\r\j\2\u\c\y\6\p\f\x\u\k\1\e\r\n\g\l\9\u\h\b\l\l\z\q\7\q\b\h\1\l\g\o\6\t\8\y\f\6\z\r\c\g\h\9\m\t\b\4\u\m\4\s\v\q\6\j\k\o\8\c\e\p\7\l\e\w\e\c\5\s\o\r\w\4\g\e\j\g\8\8\m\l\c\4\y\6\g\t\p\5\n\r\d\p\x\b\5\u\k\k\y\x\5\b\j\3\o\5\7\a\i\x\3\e\f\c\d\t\x\a\3\f\8\v\0\c\m\q\w\m\a\3\k\g\s\l\n\0\0\t\l\p\0\1\3\z\w\1\e\s\2\3\0\7\g\k\x\3\2\s\5\e\p\n\g\0\k\5\b\z\f\p\h\d\e\p\l\l\z\2\x\f\1\x\h\p\a\4\e\t\s\a\q\9\k\f\i\g\5\e\7\f\l\t\o\x\n\p\j\s\w\h\b\g\j\c\f\h\f\a\m\7\5\k\0\z\s\r\x\n\f\7\g\u\a\k\h\n\8\8\b\0\9\p\o\z\x\l\n\h\m\k\8\m\m\l\8\5\a\z\e\a\v\7\q\n\u\l\e\q\3\3\3\0\r\1\f\x\1\u\s\l\s\c\h\i\y\n\0\4\s\a\w\0\a\h\c\z\l\x\m\b\m\0\0\s\e\8\t\2\o\p\c\e\0\p\7\p\h\2\r\o\p\x\4\o\8\b\y\l\d\l\4\y\k\o\6\4\t\p\6\1\e\e\i\t\s\l\8\d\t\1\r\8\t\8\n\2\a\z\2\d\x\p\m\o\b\e\x\i\e\z ]] 00:18:56.396 20:45:39 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:56.396 20:45:39 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:18:56.656 [2024-04-15 20:45:39.956838] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:18:56.656 [2024-04-15 20:45:39.956986] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59332 ] 00:18:56.656 [2024-04-15 20:45:40.110922] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.914 [2024-04-15 20:45:40.304763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.547  Copying: 512/512 [B] (average 250 kBps) 00:18:58.547 00:18:58.547 20:45:41 -- dd/posix.sh@93 -- # [[ o8kqsqdggaj3py8bw1bpxxy3o5flc1x07ccomogaa7zgdzbyo2b6hx9t9e8q2er0rwwt67q6zcbtlb14nvak1ings57e7jo166mvfo0av8nme2w53p38c0ocgt92nbngoxd53dftd5rj2ucy6pfxuk1erngl9uhbllzq7qbh1lgo6t8yf6zrcgh9mtb4um4svq6jko8cep7lewec5sorw4gejg88mlc4y6gtp5nrdpxb5ukkyx5bj3o57aix3efcdtxa3f8v0cmqwma3kgsln00tlp013zw1es2307gkx32s5epng0k5bzfphdepllz2xf1xhpa4etsaq9kfig5e7fltoxnpjswhbgjcfhfam75k0zsrxnf7guakhn88b09pozxlnhmk8mml85azeav7qnuleq3330r1fx1uslschiyn04saw0ahczlxmbm00se8t2opce0p7ph2ropx4o8byldl4yko64tp61eeitsl8dt1r8t8n2az2dxpmobexiez == \o\8\k\q\s\q\d\g\g\a\j\3\p\y\8\b\w\1\b\p\x\x\y\3\o\5\f\l\c\1\x\0\7\c\c\o\m\o\g\a\a\7\z\g\d\z\b\y\o\2\b\6\h\x\9\t\9\e\8\q\2\e\r\0\r\w\w\t\6\7\q\6\z\c\b\t\l\b\1\4\n\v\a\k\1\i\n\g\s\5\7\e\7\j\o\1\6\6\m\v\f\o\0\a\v\8\n\m\e\2\w\5\3\p\3\8\c\0\o\c\g\t\9\2\n\b\n\g\o\x\d\5\3\d\f\t\d\5\r\j\2\u\c\y\6\p\f\x\u\k\1\e\r\n\g\l\9\u\h\b\l\l\z\q\7\q\b\h\1\l\g\o\6\t\8\y\f\6\z\r\c\g\h\9\m\t\b\4\u\m\4\s\v\q\6\j\k\o\8\c\e\p\7\l\e\w\e\c\5\s\o\r\w\4\g\e\j\g\8\8\m\l\c\4\y\6\g\t\p\5\n\r\d\p\x\b\5\u\k\k\y\x\5\b\j\3\o\5\7\a\i\x\3\e\f\c\d\t\x\a\3\f\8\v\0\c\m\q\w\m\a\3\k\g\s\l\n\0\0\t\l\p\0\1\3\z\w\1\e\s\2\3\0\7\g\k\x\3\2\s\5\e\p\n\g\0\k\5\b\z\f\p\h\d\e\p\l\l\z\2\x\f\1\x\h\p\a\4\e\t\s\a\q\9\k\f\i\g\5\e\7\f\l\t\o\x\n\p\j\s\w\h\b\g\j\c\f\h\f\a\m\7\5\k\0\z\s\r\x\n\f\7\g\u\a\k\h\n\8\8\b\0\9\p\o\z\x\l\n\h\m\k\8\m\m\l\8\5\a\z\e\a\v\7\q\n\u\l\e\q\3\3\3\0\r\1\f\x\1\u\s\l\s\c\h\i\y\n\0\4\s\a\w\0\a\h\c\z\l\x\m\b\m\0\0\s\e\8\t\2\o\p\c\e\0\p\7\p\h\2\r\o\p\x\4\o\8\b\y\l\d\l\4\y\k\o\6\4\t\p\6\1\e\e\i\t\s\l\8\d\t\1\r\8\t\8\n\2\a\z\2\d\x\p\m\o\b\e\x\i\e\z ]] 00:18:58.547 20:45:41 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:18:58.547 20:45:41 -- dd/posix.sh@86 -- # gen_bytes 512 00:18:58.547 20:45:41 -- dd/common.sh@98 -- # xtrace_disable 00:18:58.547 20:45:41 -- common/autotest_common.sh@10 -- # set +x 00:18:58.547 20:45:41 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:18:58.547 20:45:41 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:18:58.806 [2024-04-15 20:45:42.086383] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:18:58.806 [2024-04-15 20:45:42.086536] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59364 ] 00:18:58.806 [2024-04-15 20:45:42.260629] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.065 [2024-04-15 20:45:42.459331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.700  Copying: 512/512 [B] (average 500 kBps) 00:19:00.700 00:19:00.700 20:45:44 -- dd/posix.sh@93 -- # [[ rb8arglhpwrndlvdu506cv2hf6fv099ouipirlkom8t4bi2odapt2klbm5m17dzia5m2vaimp4ojjllzzdrvmti5yzvgua43d9yc0dnz3p5jwamykt3szfotr5k5tokiyuxd89l0ugsl3gfuqh723ncopdpt8qpeubsimxrs1z5zjo3zeul4cjdk4xre5y1i8sq2tgxyzq5uulots3kcye5p5uv7283mwn4npa5jmp5qtwu980h040ipzsbwob1mjbux5kfnb71igcnruyu1mzca6sq8zsp17561trs7m6yt7u0cakabcl64mdtnk1c4378ocb7li032yupxupb58kqd6c1ojcv7cfuyxwuxpqfnupmdp5km4tmahe8znftjh874x5251ej4ad8r4kfeb5exafxfjeqse701aim3bp3xwxhytbxg6w5iur8q1lwi6pe5x65g1h4g8lts6ol6qx1ceieuvqjsb6zp5bi2qr0vtb4r7vmfhxcmx4mnyuwq == \r\b\8\a\r\g\l\h\p\w\r\n\d\l\v\d\u\5\0\6\c\v\2\h\f\6\f\v\0\9\9\o\u\i\p\i\r\l\k\o\m\8\t\4\b\i\2\o\d\a\p\t\2\k\l\b\m\5\m\1\7\d\z\i\a\5\m\2\v\a\i\m\p\4\o\j\j\l\l\z\z\d\r\v\m\t\i\5\y\z\v\g\u\a\4\3\d\9\y\c\0\d\n\z\3\p\5\j\w\a\m\y\k\t\3\s\z\f\o\t\r\5\k\5\t\o\k\i\y\u\x\d\8\9\l\0\u\g\s\l\3\g\f\u\q\h\7\2\3\n\c\o\p\d\p\t\8\q\p\e\u\b\s\i\m\x\r\s\1\z\5\z\j\o\3\z\e\u\l\4\c\j\d\k\4\x\r\e\5\y\1\i\8\s\q\2\t\g\x\y\z\q\5\u\u\l\o\t\s\3\k\c\y\e\5\p\5\u\v\7\2\8\3\m\w\n\4\n\p\a\5\j\m\p\5\q\t\w\u\9\8\0\h\0\4\0\i\p\z\s\b\w\o\b\1\m\j\b\u\x\5\k\f\n\b\7\1\i\g\c\n\r\u\y\u\1\m\z\c\a\6\s\q\8\z\s\p\1\7\5\6\1\t\r\s\7\m\6\y\t\7\u\0\c\a\k\a\b\c\l\6\4\m\d\t\n\k\1\c\4\3\7\8\o\c\b\7\l\i\0\3\2\y\u\p\x\u\p\b\5\8\k\q\d\6\c\1\o\j\c\v\7\c\f\u\y\x\w\u\x\p\q\f\n\u\p\m\d\p\5\k\m\4\t\m\a\h\e\8\z\n\f\t\j\h\8\7\4\x\5\2\5\1\e\j\4\a\d\8\r\4\k\f\e\b\5\e\x\a\f\x\f\j\e\q\s\e\7\0\1\a\i\m\3\b\p\3\x\w\x\h\y\t\b\x\g\6\w\5\i\u\r\8\q\1\l\w\i\6\p\e\5\x\6\5\g\1\h\4\g\8\l\t\s\6\o\l\6\q\x\1\c\e\i\e\u\v\q\j\s\b\6\z\p\5\b\i\2\q\r\0\v\t\b\4\r\7\v\m\f\h\x\c\m\x\4\m\n\y\u\w\q ]] 00:19:00.700 20:45:44 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:19:00.700 20:45:44 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:19:00.958 [2024-04-15 20:45:44.250462] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:19:00.958 [2024-04-15 20:45:44.250877] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59395 ] 00:19:00.958 [2024-04-15 20:45:44.444829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.216 [2024-04-15 20:45:44.644456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.718  Copying: 512/512 [B] (average 500 kBps) 00:19:02.718 00:19:02.977 20:45:46 -- dd/posix.sh@93 -- # [[ rb8arglhpwrndlvdu506cv2hf6fv099ouipirlkom8t4bi2odapt2klbm5m17dzia5m2vaimp4ojjllzzdrvmti5yzvgua43d9yc0dnz3p5jwamykt3szfotr5k5tokiyuxd89l0ugsl3gfuqh723ncopdpt8qpeubsimxrs1z5zjo3zeul4cjdk4xre5y1i8sq2tgxyzq5uulots3kcye5p5uv7283mwn4npa5jmp5qtwu980h040ipzsbwob1mjbux5kfnb71igcnruyu1mzca6sq8zsp17561trs7m6yt7u0cakabcl64mdtnk1c4378ocb7li032yupxupb58kqd6c1ojcv7cfuyxwuxpqfnupmdp5km4tmahe8znftjh874x5251ej4ad8r4kfeb5exafxfjeqse701aim3bp3xwxhytbxg6w5iur8q1lwi6pe5x65g1h4g8lts6ol6qx1ceieuvqjsb6zp5bi2qr0vtb4r7vmfhxcmx4mnyuwq == \r\b\8\a\r\g\l\h\p\w\r\n\d\l\v\d\u\5\0\6\c\v\2\h\f\6\f\v\0\9\9\o\u\i\p\i\r\l\k\o\m\8\t\4\b\i\2\o\d\a\p\t\2\k\l\b\m\5\m\1\7\d\z\i\a\5\m\2\v\a\i\m\p\4\o\j\j\l\l\z\z\d\r\v\m\t\i\5\y\z\v\g\u\a\4\3\d\9\y\c\0\d\n\z\3\p\5\j\w\a\m\y\k\t\3\s\z\f\o\t\r\5\k\5\t\o\k\i\y\u\x\d\8\9\l\0\u\g\s\l\3\g\f\u\q\h\7\2\3\n\c\o\p\d\p\t\8\q\p\e\u\b\s\i\m\x\r\s\1\z\5\z\j\o\3\z\e\u\l\4\c\j\d\k\4\x\r\e\5\y\1\i\8\s\q\2\t\g\x\y\z\q\5\u\u\l\o\t\s\3\k\c\y\e\5\p\5\u\v\7\2\8\3\m\w\n\4\n\p\a\5\j\m\p\5\q\t\w\u\9\8\0\h\0\4\0\i\p\z\s\b\w\o\b\1\m\j\b\u\x\5\k\f\n\b\7\1\i\g\c\n\r\u\y\u\1\m\z\c\a\6\s\q\8\z\s\p\1\7\5\6\1\t\r\s\7\m\6\y\t\7\u\0\c\a\k\a\b\c\l\6\4\m\d\t\n\k\1\c\4\3\7\8\o\c\b\7\l\i\0\3\2\y\u\p\x\u\p\b\5\8\k\q\d\6\c\1\o\j\c\v\7\c\f\u\y\x\w\u\x\p\q\f\n\u\p\m\d\p\5\k\m\4\t\m\a\h\e\8\z\n\f\t\j\h\8\7\4\x\5\2\5\1\e\j\4\a\d\8\r\4\k\f\e\b\5\e\x\a\f\x\f\j\e\q\s\e\7\0\1\a\i\m\3\b\p\3\x\w\x\h\y\t\b\x\g\6\w\5\i\u\r\8\q\1\l\w\i\6\p\e\5\x\6\5\g\1\h\4\g\8\l\t\s\6\o\l\6\q\x\1\c\e\i\e\u\v\q\j\s\b\6\z\p\5\b\i\2\q\r\0\v\t\b\4\r\7\v\m\f\h\x\c\m\x\4\m\n\y\u\w\q ]] 00:19:02.977 20:45:46 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:19:02.977 20:45:46 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:19:02.977 [2024-04-15 20:45:46.376231] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:19:02.977 [2024-04-15 20:45:46.376380] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59424 ] 00:19:03.236 [2024-04-15 20:45:46.548735] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.494 [2024-04-15 20:45:46.739093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.128  Copying: 512/512 [B] (average 166 kBps) 00:19:05.128 00:19:05.128 20:45:48 -- dd/posix.sh@93 -- # [[ rb8arglhpwrndlvdu506cv2hf6fv099ouipirlkom8t4bi2odapt2klbm5m17dzia5m2vaimp4ojjllzzdrvmti5yzvgua43d9yc0dnz3p5jwamykt3szfotr5k5tokiyuxd89l0ugsl3gfuqh723ncopdpt8qpeubsimxrs1z5zjo3zeul4cjdk4xre5y1i8sq2tgxyzq5uulots3kcye5p5uv7283mwn4npa5jmp5qtwu980h040ipzsbwob1mjbux5kfnb71igcnruyu1mzca6sq8zsp17561trs7m6yt7u0cakabcl64mdtnk1c4378ocb7li032yupxupb58kqd6c1ojcv7cfuyxwuxpqfnupmdp5km4tmahe8znftjh874x5251ej4ad8r4kfeb5exafxfjeqse701aim3bp3xwxhytbxg6w5iur8q1lwi6pe5x65g1h4g8lts6ol6qx1ceieuvqjsb6zp5bi2qr0vtb4r7vmfhxcmx4mnyuwq == \r\b\8\a\r\g\l\h\p\w\r\n\d\l\v\d\u\5\0\6\c\v\2\h\f\6\f\v\0\9\9\o\u\i\p\i\r\l\k\o\m\8\t\4\b\i\2\o\d\a\p\t\2\k\l\b\m\5\m\1\7\d\z\i\a\5\m\2\v\a\i\m\p\4\o\j\j\l\l\z\z\d\r\v\m\t\i\5\y\z\v\g\u\a\4\3\d\9\y\c\0\d\n\z\3\p\5\j\w\a\m\y\k\t\3\s\z\f\o\t\r\5\k\5\t\o\k\i\y\u\x\d\8\9\l\0\u\g\s\l\3\g\f\u\q\h\7\2\3\n\c\o\p\d\p\t\8\q\p\e\u\b\s\i\m\x\r\s\1\z\5\z\j\o\3\z\e\u\l\4\c\j\d\k\4\x\r\e\5\y\1\i\8\s\q\2\t\g\x\y\z\q\5\u\u\l\o\t\s\3\k\c\y\e\5\p\5\u\v\7\2\8\3\m\w\n\4\n\p\a\5\j\m\p\5\q\t\w\u\9\8\0\h\0\4\0\i\p\z\s\b\w\o\b\1\m\j\b\u\x\5\k\f\n\b\7\1\i\g\c\n\r\u\y\u\1\m\z\c\a\6\s\q\8\z\s\p\1\7\5\6\1\t\r\s\7\m\6\y\t\7\u\0\c\a\k\a\b\c\l\6\4\m\d\t\n\k\1\c\4\3\7\8\o\c\b\7\l\i\0\3\2\y\u\p\x\u\p\b\5\8\k\q\d\6\c\1\o\j\c\v\7\c\f\u\y\x\w\u\x\p\q\f\n\u\p\m\d\p\5\k\m\4\t\m\a\h\e\8\z\n\f\t\j\h\8\7\4\x\5\2\5\1\e\j\4\a\d\8\r\4\k\f\e\b\5\e\x\a\f\x\f\j\e\q\s\e\7\0\1\a\i\m\3\b\p\3\x\w\x\h\y\t\b\x\g\6\w\5\i\u\r\8\q\1\l\w\i\6\p\e\5\x\6\5\g\1\h\4\g\8\l\t\s\6\o\l\6\q\x\1\c\e\i\e\u\v\q\j\s\b\6\z\p\5\b\i\2\q\r\0\v\t\b\4\r\7\v\m\f\h\x\c\m\x\4\m\n\y\u\w\q ]] 00:19:05.128 20:45:48 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:19:05.128 20:45:48 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:19:05.128 [2024-04-15 20:45:48.493415] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:19:05.128 [2024-04-15 20:45:48.493581] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59452 ] 00:19:05.386 [2024-04-15 20:45:48.668417] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.643 [2024-04-15 20:45:48.917951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.274  Copying: 512/512 [B] (average 166 kBps) 00:19:07.274 00:19:07.274 ************************************ 00:19:07.274 END TEST dd_flags_misc_forced_aio 00:19:07.274 ************************************ 00:19:07.274 20:45:50 -- dd/posix.sh@93 -- # [[ rb8arglhpwrndlvdu506cv2hf6fv099ouipirlkom8t4bi2odapt2klbm5m17dzia5m2vaimp4ojjllzzdrvmti5yzvgua43d9yc0dnz3p5jwamykt3szfotr5k5tokiyuxd89l0ugsl3gfuqh723ncopdpt8qpeubsimxrs1z5zjo3zeul4cjdk4xre5y1i8sq2tgxyzq5uulots3kcye5p5uv7283mwn4npa5jmp5qtwu980h040ipzsbwob1mjbux5kfnb71igcnruyu1mzca6sq8zsp17561trs7m6yt7u0cakabcl64mdtnk1c4378ocb7li032yupxupb58kqd6c1ojcv7cfuyxwuxpqfnupmdp5km4tmahe8znftjh874x5251ej4ad8r4kfeb5exafxfjeqse701aim3bp3xwxhytbxg6w5iur8q1lwi6pe5x65g1h4g8lts6ol6qx1ceieuvqjsb6zp5bi2qr0vtb4r7vmfhxcmx4mnyuwq == \r\b\8\a\r\g\l\h\p\w\r\n\d\l\v\d\u\5\0\6\c\v\2\h\f\6\f\v\0\9\9\o\u\i\p\i\r\l\k\o\m\8\t\4\b\i\2\o\d\a\p\t\2\k\l\b\m\5\m\1\7\d\z\i\a\5\m\2\v\a\i\m\p\4\o\j\j\l\l\z\z\d\r\v\m\t\i\5\y\z\v\g\u\a\4\3\d\9\y\c\0\d\n\z\3\p\5\j\w\a\m\y\k\t\3\s\z\f\o\t\r\5\k\5\t\o\k\i\y\u\x\d\8\9\l\0\u\g\s\l\3\g\f\u\q\h\7\2\3\n\c\o\p\d\p\t\8\q\p\e\u\b\s\i\m\x\r\s\1\z\5\z\j\o\3\z\e\u\l\4\c\j\d\k\4\x\r\e\5\y\1\i\8\s\q\2\t\g\x\y\z\q\5\u\u\l\o\t\s\3\k\c\y\e\5\p\5\u\v\7\2\8\3\m\w\n\4\n\p\a\5\j\m\p\5\q\t\w\u\9\8\0\h\0\4\0\i\p\z\s\b\w\o\b\1\m\j\b\u\x\5\k\f\n\b\7\1\i\g\c\n\r\u\y\u\1\m\z\c\a\6\s\q\8\z\s\p\1\7\5\6\1\t\r\s\7\m\6\y\t\7\u\0\c\a\k\a\b\c\l\6\4\m\d\t\n\k\1\c\4\3\7\8\o\c\b\7\l\i\0\3\2\y\u\p\x\u\p\b\5\8\k\q\d\6\c\1\o\j\c\v\7\c\f\u\y\x\w\u\x\p\q\f\n\u\p\m\d\p\5\k\m\4\t\m\a\h\e\8\z\n\f\t\j\h\8\7\4\x\5\2\5\1\e\j\4\a\d\8\r\4\k\f\e\b\5\e\x\a\f\x\f\j\e\q\s\e\7\0\1\a\i\m\3\b\p\3\x\w\x\h\y\t\b\x\g\6\w\5\i\u\r\8\q\1\l\w\i\6\p\e\5\x\6\5\g\1\h\4\g\8\l\t\s\6\o\l\6\q\x\1\c\e\i\e\u\v\q\j\s\b\6\z\p\5\b\i\2\q\r\0\v\t\b\4\r\7\v\m\f\h\x\c\m\x\4\m\n\y\u\w\q ]] 00:19:07.274 00:19:07.274 real 0m17.238s 00:19:07.274 user 0m13.837s 00:19:07.274 sys 0m1.783s 00:19:07.274 20:45:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:07.274 20:45:50 -- common/autotest_common.sh@10 -- # set +x 00:19:07.274 20:45:50 -- dd/posix.sh@1 -- # cleanup 00:19:07.274 20:45:50 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:19:07.274 20:45:50 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:19:07.274 ************************************ 00:19:07.274 END TEST spdk_dd_posix 00:19:07.274 ************************************ 00:19:07.274 00:19:07.274 real 1m10.608s 00:19:07.274 user 0m54.723s 00:19:07.274 sys 0m7.429s 00:19:07.274 20:45:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:07.274 20:45:50 -- common/autotest_common.sh@10 -- # set +x 00:19:07.274 20:45:50 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:19:07.274 20:45:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:07.274 20:45:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:07.274 20:45:50 -- 
common/autotest_common.sh@10 -- # set +x 00:19:07.274 ************************************ 00:19:07.274 START TEST spdk_dd_malloc 00:19:07.274 ************************************ 00:19:07.274 20:45:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:19:07.274 * Looking for test storage... 00:19:07.274 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:19:07.274 20:45:50 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:07.274 20:45:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:07.274 20:45:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:07.274 20:45:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:07.274 20:45:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:19:07.274 20:45:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:19:07.274 20:45:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:19:07.274 20:45:50 -- paths/export.sh@5 -- # export PATH 00:19:07.274 20:45:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:19:07.274 20:45:50 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:19:07.274 20:45:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:07.274 20:45:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:07.274 20:45:50 -- common/autotest_common.sh@10 -- # set +x 00:19:07.274 ************************************ 00:19:07.274 START TEST dd_malloc_copy 00:19:07.274 ************************************ 00:19:07.274 20:45:50 -- common/autotest_common.sh@1104 -- # malloc_copy 00:19:07.274 20:45:50 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:19:07.274 20:45:50 -- 
dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:19:07.274 20:45:50 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(["name"]=$mbdev0 ["num_blocks"]=$mbdev0_b ["block_size"]=$mbdev0_bs) 00:19:07.274 20:45:50 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:19:07.274 20:45:50 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(["name"]=$mbdev1 ["num_blocks"]=$mbdev1_b ["block_size"]=$mbdev1_bs) 00:19:07.274 20:45:50 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:19:07.274 20:45:50 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:19:07.274 20:45:50 -- dd/malloc.sh@28 -- # gen_conf 00:19:07.274 20:45:50 -- dd/common.sh@31 -- # xtrace_disable 00:19:07.274 20:45:50 -- common/autotest_common.sh@10 -- # set +x 00:19:07.532 { 00:19:07.532 "subsystems": [ 00:19:07.532 { 00:19:07.532 "subsystem": "bdev", 00:19:07.532 "config": [ 00:19:07.532 { 00:19:07.532 "params": { 00:19:07.532 "block_size": 512, 00:19:07.532 "name": "malloc0", 00:19:07.532 "num_blocks": 1048576 00:19:07.532 }, 00:19:07.532 "method": "bdev_malloc_create" 00:19:07.532 }, 00:19:07.532 { 00:19:07.532 "params": { 00:19:07.532 "block_size": 512, 00:19:07.532 "name": "malloc1", 00:19:07.532 "num_blocks": 1048576 00:19:07.532 }, 00:19:07.532 "method": "bdev_malloc_create" 00:19:07.532 }, 00:19:07.532 { 00:19:07.532 "method": "bdev_wait_for_examine" 00:19:07.532 } 00:19:07.532 ] 00:19:07.532 } 00:19:07.532 ] 00:19:07.532 } 00:19:07.533 [2024-04-15 20:45:50.924109] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:19:07.533 [2024-04-15 20:45:50.924254] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59559 ] 00:19:07.790 [2024-04-15 20:45:51.091453] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.790 [2024-04-15 20:45:51.286522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.524  Copying: 512/512 [MB] (average 650 MBps) 00:19:14.524 00:19:14.524 20:45:57 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:19:14.524 20:45:57 -- dd/malloc.sh@33 -- # gen_conf 00:19:14.524 20:45:57 -- dd/common.sh@31 -- # xtrace_disable 00:19:14.524 20:45:57 -- common/autotest_common.sh@10 -- # set +x 00:19:14.524 { 00:19:14.524 "subsystems": [ 00:19:14.524 { 00:19:14.524 "subsystem": "bdev", 00:19:14.524 "config": [ 00:19:14.524 { 00:19:14.524 "params": { 00:19:14.524 "block_size": 512, 00:19:14.524 "name": "malloc0", 00:19:14.524 "num_blocks": 1048576 00:19:14.524 }, 00:19:14.524 "method": "bdev_malloc_create" 00:19:14.524 }, 00:19:14.524 { 00:19:14.524 "params": { 00:19:14.524 "block_size": 512, 00:19:14.524 "name": "malloc1", 00:19:14.524 "num_blocks": 1048576 00:19:14.524 }, 00:19:14.524 "method": "bdev_malloc_create" 00:19:14.524 }, 00:19:14.524 { 00:19:14.524 "method": "bdev_wait_for_examine" 00:19:14.524 } 00:19:14.524 ] 00:19:14.524 } 00:19:14.524 ] 00:19:14.524 } 00:19:14.524 [2024-04-15 20:45:57.445932] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
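In the malloc_copy runs, --json /dev/fd/62 means the bdev config is streamed to spdk_dd over a file descriptor opened by the test's gen_conf helper rather than read from a file on disk. A standalone sketch using process substitution, with the config from the log inlined; each malloc bdev is 1048576 blocks x 512 B, i.e. 512 MiB, which matches the "Copying: 512/512 [MB]" progress lines:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json <(
    cat <<'EOF'
  {"subsystems": [{"subsystem": "bdev", "config": [
    {"params": {"block_size": 512, "name": "malloc0", "num_blocks": 1048576},
     "method": "bdev_malloc_create"},
    {"params": {"block_size": 512, "name": "malloc1", "num_blocks": 1048576},
     "method": "bdev_malloc_create"},
    {"method": "bdev_wait_for_examine"}]}]}
  EOF
  )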
00:19:14.524 [2024-04-15 20:45:57.446106] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59651 ] 00:19:14.524 [2024-04-15 20:45:57.626418] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.524 [2024-04-15 20:45:57.836633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.241  Copying: 512/512 [MB] (average 637 MBps) 00:19:21.241 00:19:21.241 ************************************ 00:19:21.241 END TEST dd_malloc_copy 00:19:21.241 ************************************ 00:19:21.241 00:19:21.241 real 0m13.332s 00:19:21.241 user 0m11.955s 00:19:21.241 sys 0m1.090s 00:19:21.241 20:46:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:21.241 20:46:04 -- common/autotest_common.sh@10 -- # set +x 00:19:21.241 ************************************ 00:19:21.241 END TEST spdk_dd_malloc 00:19:21.241 ************************************ 00:19:21.241 00:19:21.241 real 0m13.509s 00:19:21.241 user 0m12.028s 00:19:21.241 sys 0m1.200s 00:19:21.241 20:46:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:21.241 20:46:04 -- common/autotest_common.sh@10 -- # set +x 00:19:21.241 20:46:04 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:19:21.241 20:46:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:21.241 20:46:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:21.241 20:46:04 -- common/autotest_common.sh@10 -- # set +x 00:19:21.241 ************************************ 00:19:21.241 START TEST spdk_dd_bdev_to_bdev 00:19:21.241 ************************************ 00:19:21.241 20:46:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:19:21.241 * Looking for test storage... 
00:19:21.241 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:19:21.241 20:46:04 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:21.241 20:46:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:21.241 20:46:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:21.241 20:46:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:21.241 20:46:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:19:21.241 20:46:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:19:21.241 20:46:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:19:21.241 20:46:04 -- paths/export.sh@5 -- # export PATH 00:19:21.241 20:46:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:19:21.241 20:46:04 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:19:21.241 20:46:04 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:19:21.241 20:46:04 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:19:21.241 20:46:04 -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:19:21.241 20:46:04 -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:19:21.241 20:46:04 -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:19:21.241 20:46:04 -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:06.0 00:19:21.241 20:46:04 -- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:19:21.241 20:46:04 -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:19:21.241 20:46:04 -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(["name"]=$nvme0 ["traddr"]=$nvme0_pci ["trtype"]=pcie) 00:19:21.241 20:46:04 -- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:19:21.242 20:46:04 -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(["name"]=$bdev1 
["filename"]=$aio1 ["block_size"]=4096) 00:19:21.242 20:46:04 -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:19:21.242 20:46:04 -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:19:21.242 [2024-04-15 20:46:04.411622] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:19:21.242 [2024-04-15 20:46:04.411873] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59813 ] 00:19:21.242 [2024-04-15 20:46:04.571273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.499 [2024-04-15 20:46:04.794217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.017  Copying: 256/256 [MB] (average 1954 MBps) 00:19:23.017 00:19:23.275 20:46:06 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:23.275 20:46:06 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:23.275 20:46:06 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:19:23.275 20:46:06 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:19:23.275 20:46:06 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:19:23.275 20:46:06 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:19:23.275 20:46:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:23.275 20:46:06 -- common/autotest_common.sh@10 -- # set +x 00:19:23.275 ************************************ 00:19:23.275 START TEST dd_inflate_file 00:19:23.275 ************************************ 00:19:23.275 20:46:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:19:23.275 [2024-04-15 20:46:06.683945] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:19:23.275 [2024-04-15 20:46:06.684084] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59840 ] 00:19:23.534 [2024-04-15 20:46:06.854363] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.792 [2024-04-15 20:46:07.043354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.424  Copying: 64/64 [MB] (average 2285 MBps) 00:19:25.424 00:19:25.424 00:19:25.424 real 0m2.033s 00:19:25.424 user 0m1.579s 00:19:25.424 sys 0m0.252s 00:19:25.424 20:46:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:25.424 ************************************ 00:19:25.424 END TEST dd_inflate_file 00:19:25.424 ************************************ 00:19:25.424 20:46:08 -- common/autotest_common.sh@10 -- # set +x 00:19:25.424 20:46:08 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:19:25.424 20:46:08 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:19:25.424 20:46:08 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:19:25.424 20:46:08 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:19:25.424 20:46:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:25.424 20:46:08 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:19:25.424 20:46:08 -- common/autotest_common.sh@10 -- # set +x 00:19:25.424 20:46:08 -- dd/common.sh@31 -- # xtrace_disable 00:19:25.424 20:46:08 -- common/autotest_common.sh@10 -- # set +x 00:19:25.424 ************************************ 00:19:25.424 START TEST dd_copy_to_out_bdev 00:19:25.424 ************************************ 00:19:25.424 20:46:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:19:25.424 { 00:19:25.424 "subsystems": [ 00:19:25.424 { 00:19:25.424 "subsystem": "bdev", 00:19:25.424 "config": [ 00:19:25.424 { 00:19:25.424 "params": { 00:19:25.424 "block_size": 4096, 00:19:25.424 "name": "aio1", 00:19:25.424 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1" 00:19:25.424 }, 00:19:25.424 "method": "bdev_aio_create" 00:19:25.424 }, 00:19:25.424 { 00:19:25.424 "params": { 00:19:25.424 "trtype": "pcie", 00:19:25.424 "name": "Nvme0", 00:19:25.424 "traddr": "0000:00:06.0" 00:19:25.424 }, 00:19:25.424 "method": "bdev_nvme_attach_controller" 00:19:25.424 }, 00:19:25.424 { 00:19:25.424 "method": "bdev_wait_for_examine" 00:19:25.424 } 00:19:25.424 ] 00:19:25.424 } 00:19:25.424 ] 00:19:25.424 } 00:19:25.424 [2024-04-15 20:46:08.793687] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
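The 67108891-byte figure checked by the wc -c above is exact arithmetic: the 26-character magic line 'This Is Our Magic, find it' plus its newline (27 bytes) is written first, then dd_inflate_file appends 64 x 1 MiB of zeroes via --oflag=append, so 27 + 64*1048576 = 67108891. A sketch of the same sequence with the log's paths (the initial redirection into dd.dump0 is inferred from the size check, since xtrace does not print redirections):

  magic='This Is Our Magic, find it'
  echo "$magic" > /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0   # 27 bytes with newline
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
    --oflag=append --bs=1048576 --count=64                        # append 64 MiB
  test "$(wc -c < /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0)" -eq $((27 + 64 * 1048576))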
00:19:25.424 [2024-04-15 20:46:08.793841] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59899 ] 00:19:25.682 [2024-04-15 20:46:08.946767] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.682 [2024-04-15 20:46:09.139682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.988  Copying: 64/64 [MB] (average 107 MBps) 00:19:27.988 00:19:27.988 ************************************ 00:19:27.988 END TEST dd_copy_to_out_bdev 00:19:27.988 ************************************ 00:19:27.988 00:19:27.988 real 0m2.830s 00:19:27.988 user 0m2.429s 00:19:27.988 sys 0m0.264s 00:19:27.988 20:46:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:27.988 20:46:11 -- common/autotest_common.sh@10 -- # set +x 00:19:28.246 20:46:11 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:19:28.246 20:46:11 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:19:28.246 20:46:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:28.246 20:46:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:28.246 20:46:11 -- common/autotest_common.sh@10 -- # set +x 00:19:28.246 ************************************ 00:19:28.246 START TEST dd_offset_magic 00:19:28.246 ************************************ 00:19:28.246 20:46:11 -- common/autotest_common.sh@1104 -- # offset_magic 00:19:28.246 20:46:11 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:19:28.246 20:46:11 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:19:28.246 20:46:11 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:19:28.246 20:46:11 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:19:28.246 20:46:11 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:19:28.246 20:46:11 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:19:28.246 20:46:11 -- dd/common.sh@31 -- # xtrace_disable 00:19:28.246 20:46:11 -- common/autotest_common.sh@10 -- # set +x 00:19:28.246 { 00:19:28.246 "subsystems": [ 00:19:28.246 { 00:19:28.246 "subsystem": "bdev", 00:19:28.246 "config": [ 00:19:28.246 { 00:19:28.246 "params": { 00:19:28.246 "block_size": 4096, 00:19:28.246 "name": "aio1", 00:19:28.246 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1" 00:19:28.246 }, 00:19:28.246 "method": "bdev_aio_create" 00:19:28.246 }, 00:19:28.246 { 00:19:28.246 "params": { 00:19:28.246 "trtype": "pcie", 00:19:28.246 "name": "Nvme0", 00:19:28.246 "traddr": "0000:00:06.0" 00:19:28.246 }, 00:19:28.246 "method": "bdev_nvme_attach_controller" 00:19:28.246 }, 00:19:28.246 { 00:19:28.246 "method": "bdev_wait_for_examine" 00:19:28.246 } 00:19:28.246 ] 00:19:28.246 } 00:19:28.246 ] 00:19:28.246 } 00:19:28.246 [2024-04-15 20:46:11.693616] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:19:28.246 [2024-04-15 20:46:11.693779] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59963 ] 00:19:28.504 [2024-04-15 20:46:11.839828] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.816 [2024-04-15 20:46:12.030145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.765  Copying: 65/65 [MB] (average 221 MBps) 00:19:30.765 00:19:30.766 20:46:13 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:19:30.766 20:46:13 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:19:30.766 20:46:13 -- dd/common.sh@31 -- # xtrace_disable 00:19:30.766 20:46:13 -- common/autotest_common.sh@10 -- # set +x 00:19:30.766 { 00:19:30.766 "subsystems": [ 00:19:30.766 { 00:19:30.766 "subsystem": "bdev", 00:19:30.766 "config": [ 00:19:30.766 { 00:19:30.766 "params": { 00:19:30.766 "block_size": 4096, 00:19:30.766 "name": "aio1", 00:19:30.766 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1" 00:19:30.766 }, 00:19:30.766 "method": "bdev_aio_create" 00:19:30.766 }, 00:19:30.766 { 00:19:30.766 "params": { 00:19:30.766 "trtype": "pcie", 00:19:30.766 "name": "Nvme0", 00:19:30.766 "traddr": "0000:00:06.0" 00:19:30.766 }, 00:19:30.766 "method": "bdev_nvme_attach_controller" 00:19:30.766 }, 00:19:30.766 { 00:19:30.766 "method": "bdev_wait_for_examine" 00:19:30.766 } 00:19:30.766 ] 00:19:30.766 } 00:19:30.766 ] 00:19:30.766 } 00:19:30.766 [2024-04-15 20:46:14.110969] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:19:30.766 [2024-04-15 20:46:14.111109] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60004 ] 00:19:30.766 [2024-04-15 20:46:14.257296] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.023 [2024-04-15 20:46:14.458395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.964  Copying: 1024/1024 [kB] (average 1000 MBps) 00:19:32.964 00:19:32.964 20:46:16 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:19:32.964 20:46:16 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:19:32.964 20:46:16 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:19:32.964 20:46:16 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:19:32.964 20:46:16 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:19:32.964 20:46:16 -- dd/common.sh@31 -- # xtrace_disable 00:19:32.964 20:46:16 -- common/autotest_common.sh@10 -- # set +x 00:19:32.964 { 00:19:32.964 "subsystems": [ 00:19:32.964 { 00:19:32.964 "subsystem": "bdev", 00:19:32.964 "config": [ 00:19:32.964 { 00:19:32.964 "params": { 00:19:32.964 "block_size": 4096, 00:19:32.964 "name": "aio1", 00:19:32.964 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1" 00:19:32.964 }, 00:19:32.964 "method": "bdev_aio_create" 00:19:32.964 }, 00:19:32.964 { 00:19:32.964 "params": { 00:19:32.964 "trtype": "pcie", 00:19:32.964 "name": "Nvme0", 00:19:32.964 "traddr": "0000:00:06.0" 00:19:32.964 }, 00:19:32.964 "method": "bdev_nvme_attach_controller" 00:19:32.964 }, 00:19:32.964 { 00:19:32.964 "method": "bdev_wait_for_examine" 00:19:32.964 } 00:19:32.964 ] 00:19:32.964 } 00:19:32.964 ] 00:19:32.964 } 00:19:32.964 [2024-04-15 20:46:16.264759] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
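Each offset_magic pass is a seek/skip round trip: 65 MiB is copied from Nvme0n1 into aio1 starting at a 1 MiB-block offset (16 above, 64 below), one block is read back from the same offset, and the first 26 bytes are compared against the magic string. A hedged sketch of one pass, with conf.json again standing in for the JSON config in the log:

  offset=16   # 64 on the second pass
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 \
    --count=65 --seek="$offset" --bs=1048576 --json conf.json
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 \
    --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
    --count=1 --skip="$offset" --bs=1048576 --json conf.json
  read -rn26 magic_check < /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
  [[ $magic_check == 'This Is Our Magic, find it' ]]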
00:19:32.964 [2024-04-15 20:46:16.264903] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60037 ] 00:19:32.964 [2024-04-15 20:46:16.413335] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.222 [2024-04-15 20:46:16.606570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.534  Copying: 65/65 [MB] (average 228 MBps) 00:19:35.534 00:19:35.534 20:46:18 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:19:35.534 20:46:18 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:19:35.534 20:46:18 -- dd/common.sh@31 -- # xtrace_disable 00:19:35.534 20:46:18 -- common/autotest_common.sh@10 -- # set +x 00:19:35.534 { 00:19:35.534 "subsystems": [ 00:19:35.534 { 00:19:35.534 "subsystem": "bdev", 00:19:35.534 "config": [ 00:19:35.534 { 00:19:35.534 "params": { 00:19:35.534 "block_size": 4096, 00:19:35.534 "name": "aio1", 00:19:35.534 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1" 00:19:35.534 }, 00:19:35.534 "method": "bdev_aio_create" 00:19:35.534 }, 00:19:35.534 { 00:19:35.534 "params": { 00:19:35.534 "trtype": "pcie", 00:19:35.534 "name": "Nvme0", 00:19:35.534 "traddr": "0000:00:06.0" 00:19:35.534 }, 00:19:35.534 "method": "bdev_nvme_attach_controller" 00:19:35.534 }, 00:19:35.534 { 00:19:35.534 "method": "bdev_wait_for_examine" 00:19:35.534 } 00:19:35.534 ] 00:19:35.534 } 00:19:35.534 ] 00:19:35.534 } 00:19:35.534 [2024-04-15 20:46:18.778720] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:19:35.534 [2024-04-15 20:46:18.778920] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60078 ] 00:19:35.534 [2024-04-15 20:46:18.936906] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.795 [2024-04-15 20:46:19.137215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.764  Copying: 1024/1024 [kB] (average 1000 MBps) 00:19:37.764 00:19:37.764 20:46:20 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:19:37.764 20:46:20 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:19:37.764 ************************************ 00:19:37.764 END TEST dd_offset_magic 00:19:37.764 ************************************ 00:19:37.764 00:19:37.764 real 0m9.374s 00:19:37.764 user 0m7.469s 00:19:37.764 sys 0m1.012s 00:19:37.764 20:46:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:37.764 20:46:20 -- common/autotest_common.sh@10 -- # set +x 00:19:37.764 20:46:20 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:19:37.764 20:46:20 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:19:37.764 20:46:20 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:19:37.764 20:46:20 -- dd/common.sh@11 -- # local nvme_ref= 00:19:37.764 20:46:20 -- dd/common.sh@12 -- # local size=4194330 00:19:37.764 20:46:20 -- dd/common.sh@14 -- # local bs=1048576 00:19:37.764 20:46:20 -- dd/common.sh@15 -- # local count=5 00:19:37.764 20:46:20 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:19:37.764 20:46:20 -- dd/common.sh@18 -- # gen_conf 00:19:37.764 20:46:20 -- dd/common.sh@31 -- # xtrace_disable 00:19:37.764 20:46:20 -- common/autotest_common.sh@10 -- # set +x 00:19:37.764 { 00:19:37.764 "subsystems": [ 00:19:37.764 { 00:19:37.764 "subsystem": "bdev", 00:19:37.764 "config": [ 00:19:37.764 { 00:19:37.764 "params": { 00:19:37.764 "block_size": 4096, 00:19:37.764 "name": "aio1", 00:19:37.764 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1" 00:19:37.764 }, 00:19:37.764 "method": "bdev_aio_create" 00:19:37.764 }, 00:19:37.764 { 00:19:37.764 "params": { 00:19:37.764 "trtype": "pcie", 00:19:37.764 "name": "Nvme0", 00:19:37.764 "traddr": "0000:00:06.0" 00:19:37.764 }, 00:19:37.764 "method": "bdev_nvme_attach_controller" 00:19:37.764 }, 00:19:37.764 { 00:19:37.764 "method": "bdev_wait_for_examine" 00:19:37.764 } 00:19:37.764 ] 00:19:37.764 } 00:19:37.764 ] 00:19:37.764 } 00:19:37.764 [2024-04-15 20:46:21.119846] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
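The cleanup's clear_nvme call above zeroes exactly the region the tests touched: size 4194330 is 4*1048576 + 26, i.e. 4 MiB of data plus the 26-byte magic string, and with bs=1048576 the logged count=5 is that size rounded up to whole 1 MiB blocks. A sketch, assuming clear_nvme derives count by rounding up (which matches the values in the log):

  size=4194330                       # 4 * 1048576 + 26
  bs=1048576
  count=$(( (size + bs - 1) / bs ))  # rounds up to 5 blocks
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs="$bs" \
    --ob=Nvme0n1 --count="$count" --json conf.json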
00:19:37.764 [2024-04-15 20:46:21.120003] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60124 ] 00:19:38.024 [2024-04-15 20:46:21.277255] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.024 [2024-04-15 20:46:21.481191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.976  Copying: 5120/5120 [kB] (average 714 MBps) 00:19:39.976 00:19:39.976 20:46:23 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:19:39.976 20:46:23 -- dd/common.sh@10 -- # local bdev=aio1 00:19:39.976 20:46:23 -- dd/common.sh@11 -- # local nvme_ref= 00:19:39.976 20:46:23 -- dd/common.sh@12 -- # local size=4194330 00:19:39.976 20:46:23 -- dd/common.sh@14 -- # local bs=1048576 00:19:39.976 20:46:23 -- dd/common.sh@15 -- # local count=5 00:19:39.976 20:46:23 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:19:39.976 20:46:23 -- dd/common.sh@18 -- # gen_conf 00:19:39.976 20:46:23 -- dd/common.sh@31 -- # xtrace_disable 00:19:39.976 20:46:23 -- common/autotest_common.sh@10 -- # set +x 00:19:39.976 { 00:19:39.976 "subsystems": [ 00:19:39.976 { 00:19:39.976 "subsystem": "bdev", 00:19:39.976 "config": [ 00:19:39.976 { 00:19:39.976 "params": { 00:19:39.976 "block_size": 4096, 00:19:39.976 "name": "aio1", 00:19:39.976 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1" 00:19:39.976 }, 00:19:39.976 "method": "bdev_aio_create" 00:19:39.976 }, 00:19:39.976 { 00:19:39.976 "params": { 00:19:39.976 "trtype": "pcie", 00:19:39.976 "name": "Nvme0", 00:19:39.976 "traddr": "0000:00:06.0" 00:19:39.976 }, 00:19:39.976 "method": "bdev_nvme_attach_controller" 00:19:39.976 }, 00:19:39.976 { 00:19:39.976 "method": "bdev_wait_for_examine" 00:19:39.976 } 00:19:39.976 ] 00:19:39.976 } 00:19:39.976 ] 00:19:39.976 } 00:19:39.976 [2024-04-15 20:46:23.399720] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:19:39.976 [2024-04-15 20:46:23.399865] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60161 ] 00:19:40.234 [2024-04-15 20:46:23.557227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.493 [2024-04-15 20:46:23.761872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.140  Copying: 5120/5120 [kB] (average 166 MBps) 00:19:42.140 00:19:42.140 20:46:25 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:19:42.140 00:19:42.140 real 0m21.442s 00:19:42.140 user 0m17.114s 00:19:42.140 sys 0m2.605s 00:19:42.140 20:46:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:42.140 ************************************ 00:19:42.141 END TEST spdk_dd_bdev_to_bdev 00:19:42.141 ************************************ 00:19:42.141 20:46:25 -- common/autotest_common.sh@10 -- # set +x 00:19:42.399 20:46:25 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:19:42.399 20:46:25 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:19:42.399 20:46:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:42.399 20:46:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:42.399 20:46:25 -- common/autotest_common.sh@10 -- # set +x 00:19:42.399 ************************************ 00:19:42.399 START TEST spdk_dd_sparse 00:19:42.399 ************************************ 00:19:42.399 20:46:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:19:42.399 * Looking for test storage... 
00:19:42.399 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:19:42.399 20:46:25 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:42.399 20:46:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:42.399 20:46:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:42.399 20:46:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:42.399 20:46:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:19:42.399 20:46:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:19:42.399 20:46:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:19:42.399 20:46:25 -- paths/export.sh@5 -- # export PATH 00:19:42.399 20:46:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:19:42.399 20:46:25 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:19:42.399 20:46:25 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:19:42.399 20:46:25 -- dd/sparse.sh@110 -- # file1=file_zero1 00:19:42.399 20:46:25 -- dd/sparse.sh@111 -- # file2=file_zero2 00:19:42.399 20:46:25 -- dd/sparse.sh@112 -- # file3=file_zero3 00:19:42.399 20:46:25 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:19:42.399 20:46:25 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:19:42.399 20:46:25 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:19:42.399 20:46:25 -- dd/sparse.sh@118 -- # prepare 00:19:42.399 20:46:25 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:19:42.399 20:46:25 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:19:42.399 1+0 records in 00:19:42.399 1+0 records out 00:19:42.399 4194304 bytes (4.2 MB) copied, 0.00660946 s, 635 MB/s 00:19:42.399 20:46:25 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M 
count=1 seek=4 00:19:42.399 1+0 records in 00:19:42.399 1+0 records out 00:19:42.399 4194304 bytes (4.2 MB) copied, 0.00600022 s, 699 MB/s 00:19:42.399 20:46:25 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:19:42.399 1+0 records in 00:19:42.399 1+0 records out 00:19:42.399 4194304 bytes (4.2 MB) copied, 0.00579587 s, 724 MB/s 00:19:42.399 20:46:25 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:19:42.399 20:46:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:42.399 20:46:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:42.399 20:46:25 -- common/autotest_common.sh@10 -- # set +x 00:19:42.399 ************************************ 00:19:42.399 START TEST dd_sparse_file_to_file 00:19:42.399 ************************************ 00:19:42.399 20:46:25 -- common/autotest_common.sh@1104 -- # file_to_file 00:19:42.399 20:46:25 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:19:42.399 20:46:25 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:19:42.399 20:46:25 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:19:42.399 20:46:25 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:19:42.399 20:46:25 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(["bdev_name"]=$aio_bdev ["lvs_name"]=$lvstore) 00:19:42.399 20:46:25 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:19:42.399 20:46:25 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:19:42.399 20:46:25 -- dd/sparse.sh@41 -- # gen_conf 00:19:42.399 20:46:25 -- dd/common.sh@31 -- # xtrace_disable 00:19:42.399 20:46:25 -- common/autotest_common.sh@10 -- # set +x 00:19:42.669 { 00:19:42.669 "subsystems": [ 00:19:42.669 { 00:19:42.669 "subsystem": "bdev", 00:19:42.669 "config": [ 00:19:42.669 { 00:19:42.669 "params": { 00:19:42.669 "block_size": 4096, 00:19:42.669 "name": "dd_aio", 00:19:42.669 "filename": "dd_sparse_aio_disk" 00:19:42.669 }, 00:19:42.669 "method": "bdev_aio_create" 00:19:42.669 }, 00:19:42.669 { 00:19:42.669 "params": { 00:19:42.669 "bdev_name": "dd_aio", 00:19:42.669 "lvs_name": "dd_lvstore" 00:19:42.669 }, 00:19:42.669 "method": "bdev_lvol_create_lvstore" 00:19:42.669 }, 00:19:42.669 { 00:19:42.669 "method": "bdev_wait_for_examine" 00:19:42.669 } 00:19:42.669 ] 00:19:42.669 } 00:19:42.669 ] 00:19:42.669 } 00:19:42.669 [2024-04-15 20:46:26.024624] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
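The prepare step above builds file_zero1 as a deliberately sparse file: three 4 MiB data extents at offsets 0, 16 MiB and 32 MiB with holes between them, so the apparent size is 36 MiB (37748736 bytes) while only 12 MiB is actually allocated. The %s-versus-%b stat pair that follows verifies spdk_dd's --sparse flag preserved the holes in the copy; the arithmetic, assuming 512-byte stat blocks:

  dd if=/dev/zero of=file_zero1 bs=4M count=1          # data at offset 0
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4   # data at 16 MiB
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8   # data at 32 MiB
  stat --printf=%s file_zero1   # 37748736 = 36 MiB apparent size
  stat --printf=%b file_zero1   # 24576 blocks * 512 B = 12582912 B (12 MiB) allocated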
00:19:42.669 [2024-04-15 20:46:26.024774] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60271 ] 00:19:42.940 [2024-04-15 20:46:26.181042] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.940 [2024-04-15 20:46:26.381835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.882  Copying: 12/36 [MB] (average 1333 MBps) 00:19:44.882 00:19:44.882 20:46:28 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:19:44.882 20:46:28 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:19:44.882 20:46:28 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:19:44.882 20:46:28 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:19:44.882 20:46:28 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:19:44.882 20:46:28 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:19:44.882 20:46:28 -- dd/sparse.sh@52 -- # stat1_b=24576 00:19:44.882 20:46:28 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:19:44.882 20:46:28 -- dd/sparse.sh@53 -- # stat2_b=24576 00:19:44.882 20:46:28 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:19:44.882 00:19:44.882 real 0m2.295s 00:19:44.882 user 0m1.875s 00:19:44.882 sys 0m0.268s 00:19:44.882 20:46:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:44.882 ************************************ 00:19:44.882 END TEST dd_sparse_file_to_file 00:19:44.882 ************************************ 00:19:44.882 20:46:28 -- common/autotest_common.sh@10 -- # set +x 00:19:44.882 20:46:28 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:19:44.882 20:46:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:44.882 20:46:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:44.882 20:46:28 -- common/autotest_common.sh@10 -- # set +x 00:19:44.882 ************************************ 00:19:44.882 START TEST dd_sparse_file_to_bdev 00:19:44.882 ************************************ 00:19:44.882 20:46:28 -- common/autotest_common.sh@1104 -- # file_to_bdev 00:19:44.882 20:46:28 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:19:44.882 20:46:28 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:19:44.882 20:46:28 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(["lvs_name"]=$lvstore ["lvol_name"]=$lvol ["size"]=37748736 ["thin_provision"]=true) 00:19:44.882 20:46:28 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:19:44.882 20:46:28 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:19:44.882 20:46:28 -- dd/sparse.sh@73 -- # gen_conf 00:19:44.882 20:46:28 -- dd/common.sh@31 -- # xtrace_disable 00:19:44.882 20:46:28 -- common/autotest_common.sh@10 -- # set +x 00:19:44.882 { 00:19:44.882 "subsystems": [ 00:19:44.882 { 00:19:44.882 "subsystem": "bdev", 00:19:44.882 "config": [ 00:19:44.883 { 00:19:44.883 "params": { 00:19:44.883 "block_size": 4096, 00:19:44.883 "name": "dd_aio", 00:19:44.883 "filename": "dd_sparse_aio_disk" 00:19:44.883 }, 00:19:44.883 "method": "bdev_aio_create" 00:19:44.883 }, 00:19:44.883 { 00:19:44.883 "params": { 00:19:44.883 "thin_provision": true, 00:19:44.883 "size": 37748736, 00:19:44.883 "lvol_name": "dd_lvol", 00:19:44.883 "lvs_name": "dd_lvstore" 00:19:44.883 }, 00:19:44.883 "method": 
"bdev_lvol_create" 00:19:44.883 }, 00:19:44.883 { 00:19:44.883 "method": "bdev_wait_for_examine" 00:19:44.883 } 00:19:44.883 ] 00:19:44.883 } 00:19:44.883 ] 00:19:44.883 } 00:19:45.141 [2024-04-15 20:46:28.411587] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:19:45.141 [2024-04-15 20:46:28.411738] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60332 ] 00:19:45.141 [2024-04-15 20:46:28.558303] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.401 [2024-04-15 20:46:28.755283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.721 [2024-04-15 20:46:29.107507] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:19:45.721  Copying: 12/36 [MB] (average 461 MBps)[2024-04-15 20:46:29.175850] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:19:47.113 00:19:47.113 00:19:47.113 ************************************ 00:19:47.113 END TEST dd_sparse_file_to_bdev 00:19:47.113 ************************************ 00:19:47.113 00:19:47.113 real 0m2.249s 00:19:47.113 user 0m1.836s 00:19:47.113 sys 0m0.248s 00:19:47.113 20:46:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:47.113 20:46:30 -- common/autotest_common.sh@10 -- # set +x 00:19:47.113 20:46:30 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:19:47.113 20:46:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:47.113 20:46:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:47.113 20:46:30 -- common/autotest_common.sh@10 -- # set +x 00:19:47.113 ************************************ 00:19:47.113 START TEST dd_sparse_bdev_to_file 00:19:47.113 ************************************ 00:19:47.113 20:46:30 -- common/autotest_common.sh@1104 -- # bdev_to_file 00:19:47.113 20:46:30 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:19:47.113 20:46:30 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:19:47.113 20:46:30 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:19:47.113 20:46:30 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:19:47.113 20:46:30 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:19:47.113 20:46:30 -- dd/sparse.sh@91 -- # gen_conf 00:19:47.113 20:46:30 -- dd/common.sh@31 -- # xtrace_disable 00:19:47.113 20:46:30 -- common/autotest_common.sh@10 -- # set +x 00:19:47.382 { 00:19:47.382 "subsystems": [ 00:19:47.382 { 00:19:47.382 "subsystem": "bdev", 00:19:47.382 "config": [ 00:19:47.382 { 00:19:47.382 "params": { 00:19:47.382 "block_size": 4096, 00:19:47.382 "name": "dd_aio", 00:19:47.382 "filename": "dd_sparse_aio_disk" 00:19:47.382 }, 00:19:47.382 "method": "bdev_aio_create" 00:19:47.382 }, 00:19:47.382 { 00:19:47.382 "method": "bdev_wait_for_examine" 00:19:47.382 } 00:19:47.382 ] 00:19:47.382 } 00:19:47.382 ] 00:19:47.382 } 00:19:47.382 [2024-04-15 20:46:30.709324] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:19:47.382 [2024-04-15 20:46:30.709476] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60387 ] 00:19:47.382 [2024-04-15 20:46:30.862325] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.654 [2024-04-15 20:46:31.047414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.642  Copying: 12/36 [MB] (average 1714 MBps) 00:19:49.642 00:19:49.642 20:46:32 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:19:49.642 20:46:32 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:19:49.642 20:46:32 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:19:49.642 20:46:32 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:19:49.642 20:46:32 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:19:49.642 20:46:32 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:19:49.642 20:46:32 -- dd/sparse.sh@102 -- # stat2_b=24576 00:19:49.642 20:46:32 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:19:49.642 ************************************ 00:19:49.642 END TEST dd_sparse_bdev_to_file 00:19:49.642 ************************************ 00:19:49.642 20:46:32 -- dd/sparse.sh@103 -- # stat3_b=24576 00:19:49.642 20:46:32 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:19:49.642 00:19:49.642 real 0m2.200s 00:19:49.642 user 0m1.806s 00:19:49.642 sys 0m0.251s 00:19:49.642 20:46:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:49.642 20:46:32 -- common/autotest_common.sh@10 -- # set +x 00:19:49.642 20:46:32 -- dd/sparse.sh@1 -- # cleanup 00:19:49.642 20:46:32 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:19:49.642 20:46:32 -- dd/sparse.sh@12 -- # rm file_zero1 00:19:49.642 20:46:32 -- dd/sparse.sh@13 -- # rm file_zero2 00:19:49.642 20:46:32 -- dd/sparse.sh@14 -- # rm file_zero3 00:19:49.642 ************************************ 00:19:49.642 END TEST spdk_dd_sparse 00:19:49.642 ************************************ 00:19:49.642 00:19:49.642 real 0m7.149s 00:19:49.642 user 0m5.677s 00:19:49.642 sys 0m0.998s 00:19:49.642 20:46:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:49.643 20:46:32 -- common/autotest_common.sh@10 -- # set +x 00:19:49.643 20:46:32 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:19:49.643 20:46:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:49.643 20:46:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:49.643 20:46:32 -- common/autotest_common.sh@10 -- # set +x 00:19:49.643 ************************************ 00:19:49.643 START TEST spdk_dd_negative 00:19:49.643 ************************************ 00:19:49.643 20:46:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:19:49.643 * Looking for test storage... 
00:19:49.643 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:19:49.643 20:46:33 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:49.643 20:46:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:49.643 20:46:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:49.643 20:46:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:49.643 20:46:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:19:49.643 20:46:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:19:49.643 20:46:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:19:49.643 20:46:33 -- paths/export.sh@5 -- # export PATH 00:19:49.643 20:46:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:19:49.643 20:46:33 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:49.643 20:46:33 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:49.643 20:46:33 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:49.643 20:46:33 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:19:49.643 20:46:33 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:19:49.643 20:46:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:49.643 20:46:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:49.643 20:46:33 -- common/autotest_common.sh@10 -- # set +x 00:19:49.643 ************************************ 00:19:49.643 START TEST dd_invalid_arguments 00:19:49.643 ************************************ 00:19:49.643 20:46:33 -- common/autotest_common.sh@1104 -- # 
invalid_arguments 00:19:49.643 20:46:33 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:19:49.643 20:46:33 -- common/autotest_common.sh@640 -- # local es=0 00:19:49.643 20:46:33 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:19:49.643 20:46:33 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:49.643 20:46:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:49.643 20:46:33 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:49.643 20:46:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:49.643 20:46:33 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:49.643 20:46:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:49.643 20:46:33 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:49.643 20:46:33 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:49.643 20:46:33 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:19:49.903 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:19:49.903 options: 00:19:49.903 -c, --config JSON config file (default none) 00:19:49.903 --json JSON config file (default none) 00:19:49.903 --json-ignore-init-errors 00:19:49.903 don't exit on invalid config entry 00:19:49.903 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:19:49.903 -g, --single-file-segments 00:19:49.903 force creating just one hugetlbfs file 00:19:49.903 -h, --help show this usage 00:19:49.903 -i, --shm-id shared memory ID (optional) 00:19:49.903 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:19:49.903 --lcores lcore to CPU mapping list. The list is in the format: 00:19:49.903 [<,lcores[@CPUs]>...] 00:19:49.903 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:19:49.903 Within the group, '-' is used for range separator, 00:19:49.903 ',' is used for single number separator. 00:19:49.903 '( )' can be omitted for single element group, 00:19:49.903 '@' can be omitted if cpus and lcores have the same value 00:19:49.903 -n, --mem-channels channel number of memory channels used for DPDK 00:19:49.903 -p, --main-core main (primary) core for DPDK 00:19:49.903 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:19:49.903 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:19:49.903 --disable-cpumask-locks Disable CPU core lock files. 
00:19:49.903 --silence-noticelog disable notice level logging to stderr 00:19:49.903 --msg-mempool-size global message memory pool size in count (default: 262143) 00:19:49.903 -u, --no-pci disable PCI access 00:19:49.903 --wait-for-rpc wait for RPCs to initialize subsystems 00:19:49.903 --max-delay maximum reactor delay (in microseconds) 00:19:49.903 -B, --pci-blocked pci addr to block (can be used more than once) 00:19:49.903 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:19:49.903 -R, --huge-unlink unlink huge files after initialization 00:19:49.903 -v, --version print SPDK version 00:19:49.903 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:19:49.903 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:19:49.903 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:19:49.903 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:19:49.903 Tracepoints vary in size and can use more than one trace entry. 00:19:49.903 --rpcs-allowed comma-separated list of permitted RPCs 00:19:49.903 --env-context Opaque context for use of the env implementation 00:19:49.903 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:19:49.903 --no-huge run without using hugepages 00:19:49.903 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_daos, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:19:49.903 -e, --tpoint-group [:] 00:19:49.903 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:19:49.903 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:19:49.903 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:19:49.903 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:19:49.903 [2024-04-15 20:46:33.193410] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:19:49.903 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:19:49.903 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:19:49.903 [--------- DD Options ---------] 00:19:49.903 --if Input file. Must specify either --if or --ib. 00:19:49.903 --ib Input bdev. Must specify either --if or --ib. 00:19:49.903 --of Output file. Must specify either --of or --ob. 00:19:49.903 --ob Output bdev. Must specify either --of or --ob. 00:19:49.903 --iflag Input file flags. 00:19:49.903 --oflag Output file flags. 00:19:49.903 --bs I/O unit size (default: 4096) 00:19:49.903 --qd Queue depth (default: 2) 00:19:49.903 --count I/O unit count. The number of I/O units to copy.
(default: all) 00:19:49.903 --skip Skip this many I/O units at start of input. (default: 0) 00:19:49.903 --seek Skip this many I/O units at start of output. (default: 0) 00:19:49.903 --aio Force usage of AIO. (by default io_uring is used if available) 00:19:49.903 --sparse Enable hole skipping in input target 00:19:49.903 Available iflag and oflag values: 00:19:49.903 append - append mode 00:19:49.903 direct - use direct I/O for data 00:19:49.903 directory - fail unless a directory 00:19:49.903 dsync - use synchronized I/O for data 00:19:49.903 noatime - do not update access time 00:19:49.903 noctty - do not assign controlling terminal from file 00:19:49.903 nofollow - do not follow symlinks 00:19:49.903 nonblock - use non-blocking I/O 00:19:49.903 sync - use synchronized I/O for data and metadata 00:19:49.903 20:46:33 -- common/autotest_common.sh@643 -- # es=2 00:19:49.903 20:46:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:49.903 20:46:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:49.903 20:46:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:49.903 ************************************ 00:19:49.903 END TEST dd_invalid_arguments 00:19:49.903 ************************************ 00:19:49.903 00:19:49.903 real 0m0.176s 00:19:49.903 user 0m0.045s 00:19:49.903 sys 0m0.037s 00:19:49.903 20:46:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:49.903 20:46:33 -- common/autotest_common.sh@10 -- # set +x 00:19:49.903 20:46:33 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:19:49.903 20:46:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:49.903 20:46:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:49.903 20:46:33 -- common/autotest_common.sh@10 -- # set +x 00:19:49.903 ************************************ 00:19:49.903 START TEST dd_double_input 00:19:49.903 ************************************ 00:19:49.903 20:46:33 -- common/autotest_common.sh@1104 -- # double_input 00:19:49.903 20:46:33 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:19:49.903 20:46:33 -- common/autotest_common.sh@640 -- # local es=0 00:19:49.903 20:46:33 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:19:49.903 20:46:33 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:49.903 20:46:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:49.903 20:46:33 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:49.903 20:46:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:49.903 20:46:33 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:49.903 20:46:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:49.903 20:46:33 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:49.903 20:46:33 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:49.903 20:46:33 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:19:50.162 [2024-04-15 20:46:33.433631] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
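Each negative case from here on runs under the harness's NOT() wrapper, whose argument introspection (type -t, type -P) is what the autotest_common.sh trace above shows: it executes spdk_dd and succeeds only if spdk_dd fails. A minimal standalone equivalent, as a sketch rather than the harness's exact code:

    not() { ! "$@"; }    # succeed iff the wrapped command exits non-zero
    # hypothetical names; --if and --ib are mutually exclusive, so this must fail:
    not ./build/bin/spdk_dd --if=dd.dump0 --ib=some_bdev --ob=some_bdev

The dd_double_input run above tripped exactly that check, and the es=22 bookkeeping that follows records the expected failure as a pass.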
00:19:50.162 ************************************ 00:19:50.162 END TEST dd_double_input 00:19:50.162 ************************************ 00:19:50.162 20:46:33 -- common/autotest_common.sh@643 -- # es=22 00:19:50.162 20:46:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:50.162 20:46:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:50.162 20:46:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:50.162 00:19:50.162 real 0m0.179s 00:19:50.162 user 0m0.044s 00:19:50.162 sys 0m0.041s 00:19:50.162 20:46:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:50.162 20:46:33 -- common/autotest_common.sh@10 -- # set +x 00:19:50.162 20:46:33 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:19:50.162 20:46:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:50.162 20:46:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:50.162 20:46:33 -- common/autotest_common.sh@10 -- # set +x 00:19:50.162 ************************************ 00:19:50.162 START TEST dd_double_output 00:19:50.162 ************************************ 00:19:50.162 20:46:33 -- common/autotest_common.sh@1104 -- # double_output 00:19:50.162 20:46:33 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:19:50.162 20:46:33 -- common/autotest_common.sh@640 -- # local es=0 00:19:50.162 20:46:33 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:19:50.162 20:46:33 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:50.162 20:46:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:50.162 20:46:33 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:50.162 20:46:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:50.162 20:46:33 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:50.162 20:46:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:50.162 20:46:33 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:50.162 20:46:33 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:50.162 20:46:33 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:19:50.421 [2024-04-15 20:46:33.682015] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
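dd_double_output fails the same way for the --of/--ob pair. The remaining cases in this family probe the other contradictions; the comments quote the errors verbatim from the runs below (spdk_dd standing in for the full build/bin path, `not` as sketched above):

    not spdk_dd --ob=some_bdev                        # "You must specify either --if or --ib"
    not spdk_dd --if=dd.dump0                         # "You must specify either --of or --ob"
    not spdk_dd --if=dd.dump0 --of=dd.dump1 --bs=0    # "Invalid --bs value"

dd_smaller_blocksize then goes a step further: --bs=99999999999999 passes option parsing, but EAL cannot back an I/O buffer that large, so spdk_dd exits with 'Cannot allocate memory - try smaller block size value', which is likewise counted as a pass.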
00:19:50.421 20:46:33 -- common/autotest_common.sh@643 -- # es=22 00:19:50.421 20:46:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:50.421 20:46:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:50.421 20:46:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:50.421 ************************************ 00:19:50.421 END TEST dd_double_output 00:19:50.421 ************************************ 00:19:50.421 00:19:50.421 real 0m0.183s 00:19:50.421 user 0m0.049s 00:19:50.421 sys 0m0.040s 00:19:50.421 20:46:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:50.421 20:46:33 -- common/autotest_common.sh@10 -- # set +x 00:19:50.421 20:46:33 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:19:50.421 20:46:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:50.421 20:46:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:50.421 20:46:33 -- common/autotest_common.sh@10 -- # set +x 00:19:50.421 ************************************ 00:19:50.421 START TEST dd_no_input 00:19:50.421 ************************************ 00:19:50.421 20:46:33 -- common/autotest_common.sh@1104 -- # no_input 00:19:50.421 20:46:33 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:19:50.421 20:46:33 -- common/autotest_common.sh@640 -- # local es=0 00:19:50.421 20:46:33 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:19:50.421 20:46:33 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:50.421 20:46:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:50.421 20:46:33 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:50.421 20:46:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:50.421 20:46:33 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:50.421 20:46:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:50.421 20:46:33 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:50.421 20:46:33 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:50.421 20:46:33 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:19:50.680 [2024-04-15 20:46:33.928177] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:19:50.680 ************************************ 00:19:50.680 END TEST dd_no_input 00:19:50.680 ************************************ 00:19:50.680 20:46:33 -- common/autotest_common.sh@643 -- # es=22 00:19:50.680 20:46:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:50.680 20:46:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:50.680 20:46:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:50.680 00:19:50.680 real 0m0.178s 00:19:50.680 user 0m0.048s 00:19:50.680 sys 0m0.036s 00:19:50.680 20:46:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:50.680 20:46:33 -- common/autotest_common.sh@10 -- # set +x 00:19:50.680 20:46:34 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:19:50.680 20:46:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:50.680 20:46:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:50.680 20:46:34 -- common/autotest_common.sh@10 -- # set +x 00:19:50.680 ************************************ 
00:19:50.680 START TEST dd_no_output 00:19:50.680 ************************************ 00:19:50.680 20:46:34 -- common/autotest_common.sh@1104 -- # no_output 00:19:50.680 20:46:34 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:50.680 20:46:34 -- common/autotest_common.sh@640 -- # local es=0 00:19:50.680 20:46:34 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:50.680 20:46:34 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:50.680 20:46:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:50.680 20:46:34 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:50.680 20:46:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:50.680 20:46:34 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:50.680 20:46:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:50.680 20:46:34 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:50.680 20:46:34 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:50.680 20:46:34 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:19:50.680 [2024-04-15 20:46:34.175587] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:19:50.939 ************************************ 00:19:50.939 END TEST dd_no_output 00:19:50.939 ************************************ 00:19:50.939 20:46:34 -- common/autotest_common.sh@643 -- # es=22 00:19:50.939 20:46:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:50.939 20:46:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:50.939 20:46:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:50.939 00:19:50.939 real 0m0.183s 00:19:50.939 user 0m0.046s 00:19:50.939 sys 0m0.042s 00:19:50.939 20:46:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:50.939 20:46:34 -- common/autotest_common.sh@10 -- # set +x 00:19:50.939 20:46:34 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:19:50.939 20:46:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:50.939 20:46:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:50.939 20:46:34 -- common/autotest_common.sh@10 -- # set +x 00:19:50.939 ************************************ 00:19:50.939 START TEST dd_wrong_blocksize 00:19:50.939 ************************************ 00:19:50.939 20:46:34 -- common/autotest_common.sh@1104 -- # wrong_blocksize 00:19:50.939 20:46:34 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:19:50.939 20:46:34 -- common/autotest_common.sh@640 -- # local es=0 00:19:50.939 20:46:34 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:19:50.939 20:46:34 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:50.939 20:46:34 -- common/autotest_common.sh@632 -- # case 
"$(type -t "$arg")" in 00:19:50.939 20:46:34 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:50.939 20:46:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:50.939 20:46:34 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:50.939 20:46:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:50.939 20:46:34 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:50.939 20:46:34 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:50.939 20:46:34 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:19:50.939 [2024-04-15 20:46:34.423946] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:19:51.197 20:46:34 -- common/autotest_common.sh@643 -- # es=22 00:19:51.197 20:46:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:51.197 20:46:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:51.197 20:46:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:51.197 ************************************ 00:19:51.197 END TEST dd_wrong_blocksize 00:19:51.197 ************************************ 00:19:51.197 00:19:51.197 real 0m0.181s 00:19:51.197 user 0m0.036s 00:19:51.197 sys 0m0.050s 00:19:51.197 20:46:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:51.197 20:46:34 -- common/autotest_common.sh@10 -- # set +x 00:19:51.197 20:46:34 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:19:51.197 20:46:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:51.197 20:46:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:51.197 20:46:34 -- common/autotest_common.sh@10 -- # set +x 00:19:51.197 ************************************ 00:19:51.197 START TEST dd_smaller_blocksize 00:19:51.197 ************************************ 00:19:51.197 20:46:34 -- common/autotest_common.sh@1104 -- # smaller_blocksize 00:19:51.197 20:46:34 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:19:51.197 20:46:34 -- common/autotest_common.sh@640 -- # local es=0 00:19:51.197 20:46:34 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:19:51.197 20:46:34 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:51.197 20:46:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:51.197 20:46:34 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:51.197 20:46:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:51.197 20:46:34 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:51.197 20:46:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:51.197 20:46:34 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:51.197 20:46:34 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:19:51.197 20:46:34 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:19:51.197 [2024-04-15 20:46:34.668347] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:19:51.197 [2024-04-15 20:46:34.668504] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60685 ] 00:19:51.457 [2024-04-15 20:46:34.842187] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.717 [2024-04-15 20:46:35.051854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.288 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:19:52.288 [2024-04-15 20:46:35.595192] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:19:52.288 [2024-04-15 20:46:35.595268] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:53.225 [2024-04-15 20:46:36.440475] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:19:53.484 ************************************ 00:19:53.484 END TEST dd_smaller_blocksize 00:19:53.484 ************************************ 00:19:53.484 20:46:36 -- common/autotest_common.sh@643 -- # es=244 00:19:53.484 20:46:36 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:53.484 20:46:36 -- common/autotest_common.sh@652 -- # es=116 00:19:53.484 20:46:36 -- common/autotest_common.sh@653 -- # case "$es" in 00:19:53.484 20:46:36 -- common/autotest_common.sh@660 -- # es=1 00:19:53.484 20:46:36 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:53.484 00:19:53.484 real 0m2.299s 00:19:53.484 user 0m1.736s 00:19:53.484 sys 0m0.368s 00:19:53.484 20:46:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:53.484 20:46:36 -- common/autotest_common.sh@10 -- # set +x 00:19:53.484 20:46:36 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:19:53.484 20:46:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:53.484 20:46:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:53.484 20:46:36 -- common/autotest_common.sh@10 -- # set +x 00:19:53.484 ************************************ 00:19:53.484 START TEST dd_invalid_count 00:19:53.484 ************************************ 00:19:53.484 20:46:36 -- common/autotest_common.sh@1104 -- # invalid_count 00:19:53.484 20:46:36 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:19:53.484 20:46:36 -- common/autotest_common.sh@640 -- # local es=0 00:19:53.484 20:46:36 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:19:53.484 20:46:36 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:53.484 20:46:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:53.484 20:46:36 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:53.484 20:46:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:53.484 20:46:36 
-- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:53.484 20:46:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:53.484 20:46:36 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:53.484 20:46:36 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:53.484 20:46:36 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:19:53.743 [2024-04-15 20:46:37.025957] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:19:53.743 20:46:37 -- common/autotest_common.sh@643 -- # es=22 00:19:53.743 20:46:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:53.743 20:46:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:53.743 20:46:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:53.743 00:19:53.743 real 0m0.173s 00:19:53.743 user 0m0.042s 00:19:53.743 sys 0m0.036s 00:19:53.743 20:46:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:53.743 20:46:37 -- common/autotest_common.sh@10 -- # set +x 00:19:53.743 ************************************ 00:19:53.743 END TEST dd_invalid_count 00:19:53.743 ************************************ 00:19:53.743 20:46:37 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:19:53.743 20:46:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:53.743 20:46:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:53.743 20:46:37 -- common/autotest_common.sh@10 -- # set +x 00:19:53.743 ************************************ 00:19:53.743 START TEST dd_invalid_oflag 00:19:53.743 ************************************ 00:19:53.743 20:46:37 -- common/autotest_common.sh@1104 -- # invalid_oflag 00:19:53.744 20:46:37 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:19:53.744 20:46:37 -- common/autotest_common.sh@640 -- # local es=0 00:19:53.744 20:46:37 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:19:53.744 20:46:37 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:53.744 20:46:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:53.744 20:46:37 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:53.744 20:46:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:53.744 20:46:37 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:53.744 20:46:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:53.744 20:46:37 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:53.744 20:46:37 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:53.744 20:46:37 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:19:54.003 [2024-04-15 20:46:37.263585] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:19:54.003 ************************************ 00:19:54.003 END TEST dd_invalid_oflag 00:19:54.003 ************************************ 00:19:54.003 20:46:37 -- common/autotest_common.sh@643 -- # es=22 00:19:54.003 20:46:37 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:54.003 20:46:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:54.003 20:46:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:54.003 00:19:54.003 real 0m0.176s 00:19:54.003 user 0m0.046s 00:19:54.003 sys 0m0.036s 00:19:54.003 20:46:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:54.003 20:46:37 -- common/autotest_common.sh@10 -- # set +x 00:19:54.003 20:46:37 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:19:54.003 20:46:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:54.003 20:46:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:54.003 20:46:37 -- common/autotest_common.sh@10 -- # set +x 00:19:54.003 ************************************ 00:19:54.003 START TEST dd_invalid_iflag 00:19:54.003 ************************************ 00:19:54.003 20:46:37 -- common/autotest_common.sh@1104 -- # invalid_iflag 00:19:54.003 20:46:37 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:19:54.003 20:46:37 -- common/autotest_common.sh@640 -- # local es=0 00:19:54.003 20:46:37 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:19:54.003 20:46:37 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:54.003 20:46:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:54.003 20:46:37 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:54.003 20:46:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:54.003 20:46:37 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:54.003 20:46:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:54.003 20:46:37 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:54.003 20:46:37 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:54.003 20:46:37 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:19:54.263 [2024-04-15 20:46:37.510874] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:19:54.263 ************************************ 00:19:54.263 END TEST dd_invalid_iflag 00:19:54.263 ************************************ 00:19:54.263 20:46:37 -- common/autotest_common.sh@643 -- # es=22 00:19:54.263 20:46:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:54.263 20:46:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:54.263 20:46:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:54.263 00:19:54.263 real 0m0.181s 00:19:54.263 user 0m0.045s 00:19:54.263 sys 0m0.042s 00:19:54.263 20:46:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:54.263 20:46:37 -- common/autotest_common.sh@10 -- # set +x 00:19:54.263 20:46:37 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:19:54.263 20:46:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:54.263 20:46:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:54.263 20:46:37 -- common/autotest_common.sh@10 -- # set +x 00:19:54.263 ************************************ 00:19:54.263 START TEST dd_unknown_flag 00:19:54.263 ************************************ 00:19:54.263 20:46:37 -- common/autotest_common.sh@1104 -- # 
unknown_flag 00:19:54.263 20:46:37 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:19:54.263 20:46:37 -- common/autotest_common.sh@640 -- # local es=0 00:19:54.263 20:46:37 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:19:54.263 20:46:37 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:54.263 20:46:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:54.263 20:46:37 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:54.263 20:46:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:54.263 20:46:37 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:54.263 20:46:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:54.263 20:46:37 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:54.263 20:46:37 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:54.263 20:46:37 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:19:54.263 [2024-04-15 20:46:37.751220] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:19:54.263 [2024-04-15 20:46:37.751370] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60815 ] 00:19:54.522 [2024-04-15 20:46:37.906049] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.780 [2024-04-15 20:46:38.090993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.041 [2024-04-15 20:46:38.402043] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:19:55.041 [2024-04-15 20:46:38.402117] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Invalid argument 00:19:55.041 [2024-04-15 20:46:38.402138] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Invalid argument 00:19:55.041 [2024-04-15 20:46:38.402178] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:55.977 [2024-04-15 20:46:39.253339] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:19:56.236 ************************************ 00:19:56.236 END TEST dd_unknown_flag 00:19:56.236 ************************************ 00:19:56.236 20:46:39 -- common/autotest_common.sh@643 -- # es=234 00:19:56.236 20:46:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:56.236 20:46:39 -- common/autotest_common.sh@652 -- # es=106 00:19:56.236 20:46:39 -- common/autotest_common.sh@653 -- # case "$es" in 00:19:56.236 20:46:39 -- common/autotest_common.sh@660 -- # es=1 00:19:56.236 20:46:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:56.236 00:19:56.236 real 0m2.027s 00:19:56.236 user 0m1.614s 00:19:56.236 sys 0m0.218s 00:19:56.236 20:46:39 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:19:56.236 20:46:39 -- common/autotest_common.sh@10 -- # set +x 00:19:56.236 20:46:39 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:19:56.236 20:46:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:56.236 20:46:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:56.236 20:46:39 -- common/autotest_common.sh@10 -- # set +x 00:19:56.236 ************************************ 00:19:56.236 START TEST dd_invalid_json 00:19:56.236 ************************************ 00:19:56.236 20:46:39 -- common/autotest_common.sh@1104 -- # invalid_json 00:19:56.236 20:46:39 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:19:56.236 20:46:39 -- common/autotest_common.sh@640 -- # local es=0 00:19:56.236 20:46:39 -- dd/negative_dd.sh@95 -- # : 00:19:56.236 20:46:39 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:19:56.236 20:46:39 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:56.236 20:46:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:56.236 20:46:39 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:56.236 20:46:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:56.236 20:46:39 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:56.236 20:46:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:56.236 20:46:39 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:56.236 20:46:39 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:19:56.236 20:46:39 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:19:56.495 [2024-04-15 20:46:39.846354] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
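dd_invalid_json feeds --json from a command that prints nothing (the bare ':' in the trace above), so the parser sees an empty document and fails with 'Parsing JSON configuration failed (-2)'. A standalone sketch of the same probe, with process substitution standing in for the harness's /dev/fd/62 plumbing:

    not ./build/bin/spdk_dd --if=dd.dump0 --of=dd.dump1 --json <(:)   # empty, hence invalid, JSON

As with dd_unknown_flag above, the wrapper folds the resulting exit status (es=234 -> 106 -> 1) before recording the negative test as passed.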
00:19:56.495 [2024-04-15 20:46:39.846506] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60868 ] 00:19:56.754 [2024-04-15 20:46:40.007765] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.754 [2024-04-15 20:46:40.208170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.754 [2024-04-15 20:46:40.208354] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:19:56.754 [2024-04-15 20:46:40.208389] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:56.754 [2024-04-15 20:46:40.208433] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:19:57.319 20:46:40 -- common/autotest_common.sh@643 -- # es=234 00:19:57.319 20:46:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:57.319 20:46:40 -- common/autotest_common.sh@652 -- # es=106 00:19:57.319 20:46:40 -- common/autotest_common.sh@653 -- # case "$es" in 00:19:57.319 20:46:40 -- common/autotest_common.sh@660 -- # es=1 00:19:57.319 20:46:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:57.319 00:19:57.319 real 0m0.958s 00:19:57.319 user 0m0.639s 00:19:57.319 sys 0m0.126s 00:19:57.319 20:46:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:57.319 20:46:40 -- common/autotest_common.sh@10 -- # set +x 00:19:57.319 ************************************ 00:19:57.319 END TEST dd_invalid_json 00:19:57.319 ************************************ 00:19:57.319 ************************************ 00:19:57.319 END TEST spdk_dd_negative 00:19:57.319 ************************************ 00:19:57.319 00:19:57.319 real 0m7.800s 00:19:57.319 user 0m4.717s 00:19:57.319 sys 0m1.645s 00:19:57.319 20:46:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:57.319 20:46:40 -- common/autotest_common.sh@10 -- # set +x 00:19:57.319 00:19:57.319 real 2m53.053s 00:19:57.319 user 2m17.338s 00:19:57.319 sys 0m20.212s 00:19:57.319 20:46:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:57.319 20:46:40 -- common/autotest_common.sh@10 -- # set +x 00:19:57.319 ************************************ 00:19:57.319 END TEST spdk_dd 00:19:57.319 ************************************ 00:19:57.319 20:46:40 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:19:57.319 20:46:40 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:19:57.319 20:46:40 -- spdk/autotest.sh@268 -- # timing_exit lib 00:19:57.319 20:46:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:57.319 20:46:40 -- common/autotest_common.sh@10 -- # set +x 00:19:57.607 20:46:40 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:19:57.607 20:46:40 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:19:57.607 20:46:40 -- spdk/autotest.sh@287 -- # '[' 0 -eq 1 ']' 00:19:57.607 20:46:40 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:57.607 20:46:40 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:19:57.607 20:46:40 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:19:57.607 20:46:40 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:19:57.607 20:46:40 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:19:57.607 20:46:40 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:19:57.607 20:46:40 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:19:57.607 20:46:40 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:19:57.607 20:46:40 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:19:57.607 20:46:40 -- 
spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:19:57.607 20:46:40 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:19:57.607 20:46:40 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:19:57.607 20:46:40 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:19:57.607 20:46:40 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:19:57.607 20:46:40 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:19:57.607 20:46:40 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:19:57.607 20:46:40 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:19:57.607 20:46:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:57.607 20:46:40 -- common/autotest_common.sh@10 -- # set +x 00:19:57.607 20:46:40 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:19:57.607 20:46:40 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:19:57.607 20:46:40 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:19:57.607 20:46:40 -- common/autotest_common.sh@10 -- # set +x 00:19:58.982 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:19:58.982 Waiting for block devices as requested 00:19:58.982 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:19:59.550 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1, so not binding PCI dev 00:19:59.550 Cleaning 00:19:59.550 Removing: /var/run/dpdk/spdk0/config 00:19:59.550 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:19:59.550 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:19:59.550 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:19:59.550 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:19:59.550 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:19:59.550 Removing: /var/run/dpdk/spdk0/hugepage_info 00:19:59.550 Removing: /dev/shm/spdk_tgt_trace.pid39223 00:19:59.550 Removing: /var/run/dpdk/spdk0 00:19:59.550 Removing: /var/run/dpdk/spdk_pid38956 00:19:59.550 Removing: /var/run/dpdk/spdk_pid39223 00:19:59.550 Removing: /var/run/dpdk/spdk_pid39543 00:19:59.550 Removing: /var/run/dpdk/spdk_pid39794 00:19:59.550 Removing: /var/run/dpdk/spdk_pid39979 00:19:59.550 Removing: /var/run/dpdk/spdk_pid40099 00:19:59.550 Removing: /var/run/dpdk/spdk_pid40220 00:19:59.550 Removing: /var/run/dpdk/spdk_pid40352 00:19:59.550 Removing: /var/run/dpdk/spdk_pid40475 00:19:59.550 Removing: /var/run/dpdk/spdk_pid40528 00:19:59.550 Removing: /var/run/dpdk/spdk_pid40578 00:19:59.550 Removing: /var/run/dpdk/spdk_pid40667 00:19:59.550 Removing: /var/run/dpdk/spdk_pid40841 00:19:59.550 Removing: /var/run/dpdk/spdk_pid40930 00:19:59.550 Removing: /var/run/dpdk/spdk_pid41023 00:19:59.550 Removing: /var/run/dpdk/spdk_pid41058 00:19:59.550 Removing: /var/run/dpdk/spdk_pid41224 00:19:59.550 Removing: /var/run/dpdk/spdk_pid41259 00:19:59.550 Removing: /var/run/dpdk/spdk_pid41425 00:19:59.550 Removing: /var/run/dpdk/spdk_pid41448 00:19:59.550 Removing: /var/run/dpdk/spdk_pid41530 00:19:59.550 Removing: /var/run/dpdk/spdk_pid41555 00:19:59.550 Removing: /var/run/dpdk/spdk_pid41627 00:19:59.550 Removing: /var/run/dpdk/spdk_pid41659 00:19:59.550 Removing: /var/run/dpdk/spdk_pid41857 00:19:59.550 Removing: /var/run/dpdk/spdk_pid41910 00:19:59.550 Removing: /var/run/dpdk/spdk_pid41953 00:19:59.550 Removing: /var/run/dpdk/spdk_pid42045 00:19:59.550 Removing: /var/run/dpdk/spdk_pid42139 00:19:59.550 Removing: /var/run/dpdk/spdk_pid42187 00:19:59.550 Removing: /var/run/dpdk/spdk_pid42290 00:19:59.550 Removing: /var/run/dpdk/spdk_pid42324 00:19:59.550 Removing: /var/run/dpdk/spdk_pid42383 00:19:59.550 Removing: /var/run/dpdk/spdk_pid42417 
00:19:59.550 Removing: /var/run/dpdk/spdk_pid42481 00:19:59.550 Removing: /var/run/dpdk/spdk_pid42515 00:19:59.550 Removing: /var/run/dpdk/spdk_pid42569 00:19:59.550 Removing: /var/run/dpdk/spdk_pid42615 00:19:59.550 Removing: /var/run/dpdk/spdk_pid42667 00:19:59.550 Removing: /var/run/dpdk/spdk_pid42713 00:19:59.550 Removing: /var/run/dpdk/spdk_pid42760 00:19:59.550 Removing: /var/run/dpdk/spdk_pid42806 00:19:59.550 Removing: /var/run/dpdk/spdk_pid42864 00:19:59.550 Removing: /var/run/dpdk/spdk_pid42900 00:19:59.550 Removing: /var/run/dpdk/spdk_pid42957 00:19:59.550 Removing: /var/run/dpdk/spdk_pid42993 00:19:59.550 Removing: /var/run/dpdk/spdk_pid43054 00:19:59.550 Removing: /var/run/dpdk/spdk_pid43095 00:19:59.550 Removing: /var/run/dpdk/spdk_pid43147 00:19:59.550 Removing: /var/run/dpdk/spdk_pid43190 00:19:59.550 Removing: /var/run/dpdk/spdk_pid43237 00:19:59.550 Removing: /var/run/dpdk/spdk_pid43283 00:19:59.550 Removing: /var/run/dpdk/spdk_pid43337 00:19:59.550 Removing: /var/run/dpdk/spdk_pid43380 00:19:59.550 Removing: /var/run/dpdk/spdk_pid43434 00:19:59.550 Removing: /var/run/dpdk/spdk_pid43472 00:19:59.550 Removing: /var/run/dpdk/spdk_pid43527 00:19:59.550 Removing: /var/run/dpdk/spdk_pid43630 00:19:59.550 Removing: /var/run/dpdk/spdk_pid43771 00:19:59.809 Removing: /var/run/dpdk/spdk_pid43977 00:19:59.809 Removing: /var/run/dpdk/spdk_pid44069 00:19:59.809 Removing: /var/run/dpdk/spdk_pid44131 00:19:59.809 Removing: /var/run/dpdk/spdk_pid44281 00:19:59.809 Removing: /var/run/dpdk/spdk_pid44521 00:19:59.809 Removing: /var/run/dpdk/spdk_pid44757 00:19:59.810 Removing: /var/run/dpdk/spdk_pid44887 00:19:59.810 Removing: /var/run/dpdk/spdk_pid45028 00:19:59.810 Removing: /var/run/dpdk/spdk_pid45114 00:19:59.810 Removing: /var/run/dpdk/spdk_pid45152 00:19:59.810 Removing: /var/run/dpdk/spdk_pid45191 00:19:59.810 Removing: /var/run/dpdk/spdk_pid45688 00:19:59.810 Removing: /var/run/dpdk/spdk_pid45793 00:19:59.810 Removing: /var/run/dpdk/spdk_pid45931 00:19:59.810 Removing: /var/run/dpdk/spdk_pid46009 00:19:59.810 Removing: /var/run/dpdk/spdk_pid46909 00:19:59.810 Removing: /var/run/dpdk/spdk_pid47771 00:19:59.810 Removing: /var/run/dpdk/spdk_pid48658 00:19:59.810 Removing: /var/run/dpdk/spdk_pid49747 00:19:59.810 Removing: /var/run/dpdk/spdk_pid50818 00:19:59.810 Removing: /var/run/dpdk/spdk_pid51881 00:19:59.810 Removing: /var/run/dpdk/spdk_pid53332 00:19:59.810 Removing: /var/run/dpdk/spdk_pid54508 00:19:59.810 Removing: /var/run/dpdk/spdk_pid55679 00:19:59.810 Removing: /var/run/dpdk/spdk_pid56376 00:19:59.810 Removing: /var/run/dpdk/spdk_pid56441 00:19:59.810 Removing: /var/run/dpdk/spdk_pid56508 00:19:59.810 Removing: /var/run/dpdk/spdk_pid56571 00:19:59.810 Removing: /var/run/dpdk/spdk_pid56722 00:19:59.810 Removing: /var/run/dpdk/spdk_pid56881 00:19:59.810 Removing: /var/run/dpdk/spdk_pid57116 00:19:59.810 Removing: /var/run/dpdk/spdk_pid57376 00:19:59.810 Removing: /var/run/dpdk/spdk_pid57391 00:19:59.810 Removing: /var/run/dpdk/spdk_pid57455 00:19:59.810 Removing: /var/run/dpdk/spdk_pid57489 00:19:59.810 Removing: /var/run/dpdk/spdk_pid57522 00:19:59.810 Removing: /var/run/dpdk/spdk_pid57565 00:19:59.810 Removing: /var/run/dpdk/spdk_pid57597 00:19:59.810 Removing: /var/run/dpdk/spdk_pid57630 00:19:59.810 Removing: /var/run/dpdk/spdk_pid57674 00:19:59.810 Removing: /var/run/dpdk/spdk_pid57705 00:19:59.810 Removing: /var/run/dpdk/spdk_pid57740 00:19:59.810 Removing: /var/run/dpdk/spdk_pid57783 00:19:59.810 Removing: /var/run/dpdk/spdk_pid57815 00:19:59.810 Removing: 
/var/run/dpdk/spdk_pid57843 00:19:59.810 Removing: /var/run/dpdk/spdk_pid57888 00:19:59.810 Removing: /var/run/dpdk/spdk_pid57921 00:19:59.810 Removing: /var/run/dpdk/spdk_pid57950 00:19:59.810 Removing: /var/run/dpdk/spdk_pid57987 00:19:59.810 Removing: /var/run/dpdk/spdk_pid58014 00:19:59.810 Removing: /var/run/dpdk/spdk_pid58057 00:19:59.810 Removing: /var/run/dpdk/spdk_pid58110 00:19:59.810 Removing: /var/run/dpdk/spdk_pid58146 00:19:59.810 Removing: /var/run/dpdk/spdk_pid58193 00:19:59.810 Removing: /var/run/dpdk/spdk_pid58283 00:19:59.810 Removing: /var/run/dpdk/spdk_pid58339 00:19:59.810 Removing: /var/run/dpdk/spdk_pid58371 00:19:59.810 Removing: /var/run/dpdk/spdk_pid58421 00:19:59.810 Removing: /var/run/dpdk/spdk_pid58457 00:19:59.810 Removing: /var/run/dpdk/spdk_pid58484 00:19:59.810 Removing: /var/run/dpdk/spdk_pid58557 00:19:59.810 Removing: /var/run/dpdk/spdk_pid58587 00:19:59.810 Removing: /var/run/dpdk/spdk_pid58631 00:19:59.810 Removing: /var/run/dpdk/spdk_pid58664 00:19:59.810 Removing: /var/run/dpdk/spdk_pid58697 00:19:59.810 Removing: /var/run/dpdk/spdk_pid58726 00:19:59.810 Removing: /var/run/dpdk/spdk_pid58750 00:19:59.810 Removing: /var/run/dpdk/spdk_pid58779 00:19:59.810 Removing: /var/run/dpdk/spdk_pid58808 00:19:59.810 Removing: /var/run/dpdk/spdk_pid58841 00:19:59.810 Removing: /var/run/dpdk/spdk_pid58898 00:19:59.810 Removing: /var/run/dpdk/spdk_pid58952 00:19:59.810 Removing: /var/run/dpdk/spdk_pid58979 00:19:59.810 Removing: /var/run/dpdk/spdk_pid59031 00:19:59.810 Removing: /var/run/dpdk/spdk_pid59066 00:19:59.810 Removing: /var/run/dpdk/spdk_pid59092 00:19:59.810 Removing: /var/run/dpdk/spdk_pid59157 00:19:59.810 Removing: /var/run/dpdk/spdk_pid59188 00:19:59.810 Removing: /var/run/dpdk/spdk_pid59239 00:19:59.810 Removing: /var/run/dpdk/spdk_pid59272 00:19:59.810 Removing: /var/run/dpdk/spdk_pid59307 00:19:59.810 Removing: /var/run/dpdk/spdk_pid59332 00:19:59.810 Removing: /var/run/dpdk/spdk_pid59364 00:19:59.810 Removing: /var/run/dpdk/spdk_pid59395 00:19:59.810 Removing: /var/run/dpdk/spdk_pid59424 00:20:00.068 Removing: /var/run/dpdk/spdk_pid59452 00:20:00.068 Removing: /var/run/dpdk/spdk_pid59559 00:20:00.068 Removing: /var/run/dpdk/spdk_pid59651 00:20:00.068 Removing: /var/run/dpdk/spdk_pid59813 00:20:00.068 Removing: /var/run/dpdk/spdk_pid59840 00:20:00.068 Removing: /var/run/dpdk/spdk_pid59899 00:20:00.068 Removing: /var/run/dpdk/spdk_pid59963 00:20:00.068 Removing: /var/run/dpdk/spdk_pid60004 00:20:00.068 Removing: /var/run/dpdk/spdk_pid60037 00:20:00.069 Removing: /var/run/dpdk/spdk_pid60078 00:20:00.069 Removing: /var/run/dpdk/spdk_pid60124 00:20:00.069 Removing: /var/run/dpdk/spdk_pid60161 00:20:00.069 Removing: /var/run/dpdk/spdk_pid60271 00:20:00.069 Removing: /var/run/dpdk/spdk_pid60332 00:20:00.069 Removing: /var/run/dpdk/spdk_pid60387 00:20:00.069 Removing: /var/run/dpdk/spdk_pid60685 00:20:00.069 Removing: /var/run/dpdk/spdk_pid60815 00:20:00.069 Removing: /var/run/dpdk/spdk_pid60868 00:20:00.069 Clean 00:20:00.069 killing process with pid 30765 00:20:00.069 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: 30765 Terminated "$rootdir/scripts/perf/pm/collect-cpu-load" -d "$output_dir/power" > /dev/null (wd: /home/vagrant/spdk_repo) 00:20:00.069 killing process with pid 30766 00:20:00.069 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: 30766 Terminated "$rootdir/scripts/perf/pm/collect-vmstat" -d "$output_dir/power" > /dev/null (wd: /home/vagrant/spdk_repo) 00:20:00.069 20:46:43 -- 
common/autotest_common.sh@1436 -- # return 0 00:20:00.069 20:46:43 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:20:00.069 20:46:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:00.069 20:46:43 -- common/autotest_common.sh@10 -- # set +x 00:20:00.069 20:46:43 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:20:00.069 20:46:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:00.069 20:46:43 -- common/autotest_common.sh@10 -- # set +x 00:20:00.327 20:46:43 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:00.327 20:46:43 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:20:00.327 20:46:43 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:20:00.327 20:46:43 -- spdk/autotest.sh@394 -- # hash lcov 00:20:00.327 20:46:43 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:20:00.327 20:46:43 -- spdk/autotest.sh@396 -- # hostname 00:20:00.327 20:46:43 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t centos7-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:20:00.327 geninfo: WARNING: invalid characters removed from testname! 00:20:56.570 20:47:30 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:56.570 20:47:35 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:56.570 20:47:37 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:57.144 20:47:40 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:59.698 20:47:42 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:02.241 20:47:45 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc 
geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:04.773 20:47:48 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:21:05.033 20:47:48 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:05.033 20:47:48 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:21:05.033 20:47:48 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:05.033 20:47:48 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:05.033 20:47:48 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:21:05.033 20:47:48 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:21:05.033 20:47:48 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:21:05.033 20:47:48 -- paths/export.sh@5 -- $ export PATH 00:21:05.033 20:47:48 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:21:05.033 20:47:48 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:21:05.033 20:47:48 -- common/autobuild_common.sh@435 -- $ date +%s 00:21:05.033 20:47:48 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713214068.XXXXXX 00:21:05.033 20:47:48 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713214068.VkQKSW 00:21:05.033 20:47:48 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:21:05.033 20:47:48 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:21:05.033 20:47:48 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:21:05.033 20:47:48 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:21:05.033 20:47:48 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:21:05.033 20:47:48 -- common/autobuild_common.sh@451 -- $ get_config_params 00:21:05.033 20:47:48 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:21:05.033 20:47:48 -- common/autotest_common.sh@10 -- $ set +x 00:21:05.033 20:47:48 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --enable-asan --enable-coverage --with-daos' 00:21:05.033 20:47:48 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 
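Before autopackage takes over (its MAKEFLAGS=-j10 line directly above), the coverage steps earlier in the trace follow the standard lcov flow: capture the counters accumulated during the test run, merge them with the pre-test baseline, then filter out sources that should not count toward SPDK coverage. A condensed sketch of that pipeline, with the long --rc branch/function-coverage switches from the log omitted for brevity:

    repo=/home/vagrant/spdk_repo/spdk
    out=$repo/../output

    # Capture counters gathered while the tests ran.
    lcov -q -c --no-external -d "$repo" -t "$(hostname)" -o "$out/cov_test.info"

    # Merge the pre-test baseline with the post-test capture.
    lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

    # Strip third-party and system sources from the combined report.
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
    done

    rm -f cov_base.info cov_test.info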
00:21:05.033 20:47:48 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:21:05.033 20:47:48 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:21:05.033 20:47:48 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:21:05.033 20:47:48 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:21:05.033 20:47:48 -- spdk/autopackage.sh@23 -- $ timing_enter build_release 00:21:05.033 20:47:48 -- common/autotest_common.sh@712 -- $ xtrace_disable 00:21:05.033 20:47:48 -- common/autotest_common.sh@10 -- $ set +x 00:21:05.033 20:47:48 -- spdk/autopackage.sh@26 -- $ [[ '' == *clang* ]] 00:21:05.033 20:47:48 -- spdk/autopackage.sh@36 -- $ [[ -n '' ]] 00:21:05.033 20:47:48 -- spdk/autopackage.sh@40 -- $ get_config_params 00:21:05.033 20:47:48 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:21:05.033 20:47:48 -- spdk/autopackage.sh@40 -- $ sed s/--enable-debug//g 00:21:05.033 20:47:48 -- common/autotest_common.sh@10 -- $ set +x 00:21:05.033 20:47:48 -- spdk/autopackage.sh@40 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --enable-asan --enable-coverage --with-daos' 00:21:05.033 20:47:48 -- spdk/autopackage.sh@41 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --enable-asan --enable-coverage --with-daos --enable-lto 00:21:05.033 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:21:05.033 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:21:05.293 RDMA_OPTION_ID_ACK_TIMEOUT is not supported 00:21:05.293 Using 'verbs' RDMA provider 00:21:05.860 WARNING: ISA-L & DPDK crypto cannot be used as nasm ver must be 2.14 or newer. 00:21:05.860 Without ISA-L, there is no software support for crypto or compression, 00:21:05.860 so these features will be disabled. 00:21:06.120 Creating mk/config.mk...done. 00:21:06.120 Creating mk/cc.flags.mk...done. 00:21:06.120 Type 'make' to build. 00:21:06.120 20:47:49 -- spdk/autopackage.sh@43 -- $ make -j10 00:21:06.379 make[1]: Nothing to be done for 'all'. 
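The release rebuild traced above is straightforward: autopackage reuses the configure parameters from the functional run, strips --enable-debug with sed, and reconfigures with link-time optimization before rebuilding. A sketch of the equivalent commands:

    # Same parameters the test build used, minus debug, plus LTO.
    config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --enable-asan --enable-coverage --with-daos'
    config_params=$(sed 's/--enable-debug//g' <<< "$config_params")

    ./configure $config_params --enable-lto   # unquoted on purpose: flags must word-split
    make -j10

No pre-built DPDK is supplied, so configure falls back to the bundled dpdk/ submodule ("Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build" above), and the fresh Meson setup of DPDK 23.11.0 that follows is that submodule being reconfigured with the new flags.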
00:21:11.657 The Meson build system 00:21:11.657 Version: 0.61.5 00:21:11.657 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:21:11.657 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:21:11.657 Build type: native build 00:21:11.657 Program cat found: YES (/bin/cat) 00:21:11.657 Project name: DPDK 00:21:11.657 Project version: 23.11.0 00:21:11.657 C compiler for the host machine: cc (gcc 10.2.1 "cc (GCC) 10.2.1 20210130 (Red Hat 10.2.1-11)") 00:21:11.657 C linker for the host machine: cc ld.bfd 2.35-5 00:21:11.657 Host machine cpu family: x86_64 00:21:11.657 Host machine cpu: x86_64 00:21:11.657 Message: ## Building in Developer Mode ## 00:21:11.657 Program pkg-config found: YES (/bin/pkg-config) 00:21:11.657 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:21:11.657 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:21:11.657 Program python3 found: YES (/usr/bin/python3) 00:21:11.657 Program cat found: YES (/bin/cat) 00:21:11.657 Compiler for C supports arguments -march=native: YES 00:21:11.657 Checking for size of "void *" : 8 00:21:11.657 Checking for size of "void *" : 8 00:21:11.657 Library m found: YES 00:21:11.657 Library numa found: YES 00:21:11.657 Has header "numaif.h" : YES 00:21:11.657 Library fdt found: NO 00:21:11.657 Library execinfo found: NO 00:21:11.657 Has header "execinfo.h" : YES 00:21:11.657 Found pkg-config: /bin/pkg-config (0.27.1) 00:21:11.657 Run-time dependency libarchive found: NO (tried pkgconfig) 00:21:11.657 Run-time dependency libbsd found: NO (tried pkgconfig) 00:21:11.657 Run-time dependency jansson found: NO (tried pkgconfig) 00:21:11.657 Run-time dependency openssl found: YES 1.0.2k 00:21:11.657 Run-time dependency libpcap found: NO (tried pkgconfig) 00:21:11.657 Library pcap found: NO 00:21:11.657 Compiler for C supports arguments -Wcast-qual: YES 00:21:11.657 Compiler for C supports arguments -Wdeprecated: YES 00:21:11.657 Compiler for C supports arguments -Wformat: YES 00:21:11.657 Compiler for C supports arguments -Wformat-nonliteral: NO 00:21:11.657 Compiler for C supports arguments -Wformat-security: NO 00:21:11.657 Compiler for C supports arguments -Wmissing-declarations: YES 00:21:11.657 Compiler for C supports arguments -Wmissing-prototypes: YES 00:21:11.657 Compiler for C supports arguments -Wnested-externs: YES 00:21:11.657 Compiler for C supports arguments -Wold-style-definition: YES 00:21:11.657 Compiler for C supports arguments -Wpointer-arith: YES 00:21:11.657 Compiler for C supports arguments -Wsign-compare: YES 00:21:11.657 Compiler for C supports arguments -Wstrict-prototypes: YES 00:21:11.657 Compiler for C supports arguments -Wundef: YES 00:21:11.657 Compiler for C supports arguments -Wwrite-strings: YES 00:21:11.657 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:21:11.657 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:21:11.657 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:21:11.657 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:21:11.657 Program objdump found: YES (/bin/objdump) 00:21:11.657 Compiler for C supports arguments -mavx512f: YES 00:21:11.657 Checking if "AVX512 checking" compiles: YES 00:21:11.657 Fetching value of define "__SSE4_2__" : 1 00:21:11.657 Fetching value of define "__AES__" : 1 00:21:11.658 Fetching value of define "__AVX__" : 1 00:21:11.658 Fetching value of define "__AVX2__" : 1 
00:21:11.658 Fetching value of define "__AVX512BW__" : 1 00:21:11.658 Fetching value of define "__AVX512CD__" : 1 00:21:11.658 Fetching value of define "__AVX512DQ__" : 1 00:21:11.658 Fetching value of define "__AVX512F__" : 1 00:21:11.658 Fetching value of define "__AVX512VL__" : 1 00:21:11.658 Fetching value of define "__PCLMUL__" : 1 00:21:11.658 Fetching value of define "__RDRND__" : 1 00:21:11.658 Fetching value of define "__RDSEED__" : 1 00:21:11.658 Fetching value of define "__VPCLMULQDQ__" : 00:21:11.658 Fetching value of define "__znver1__" : 00:21:11.658 Fetching value of define "__znver2__" : 00:21:11.658 Fetching value of define "__znver3__" : 00:21:11.658 Fetching value of define "__znver4__" : 00:21:11.658 Compiler for C supports arguments -ffat-lto-objects: YES 00:21:11.658 Library asan found: YES 00:21:11.658 Compiler for C supports arguments -Wno-format-truncation: YES 00:21:11.658 Message: lib/log: Defining dependency "log" 00:21:11.658 Message: lib/kvargs: Defining dependency "kvargs" 00:21:11.658 Message: lib/telemetry: Defining dependency "telemetry" 00:21:11.658 Library rt found: YES 00:21:11.658 Checking for function "getentropy" : NO 00:21:11.658 Message: lib/eal: Defining dependency "eal" 00:21:11.658 Message: lib/ring: Defining dependency "ring" 00:21:11.658 Message: lib/rcu: Defining dependency "rcu" 00:21:11.658 Message: lib/mempool: Defining dependency "mempool" 00:21:11.658 Message: lib/mbuf: Defining dependency "mbuf" 00:21:11.658 Fetching value of define "__PCLMUL__" : 1 (cached) 00:21:11.658 Fetching value of define "__AVX512F__" : 1 (cached) 00:21:12.260 Fetching value of define "__AVX512BW__" : 1 (cached) 00:21:12.260 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:21:12.260 Fetching value of define "__AVX512VL__" : 1 (cached) 00:21:12.260 Fetching value of define "__VPCLMULQDQ__" : (cached) 00:21:12.260 Compiler for C supports arguments -mpclmul: YES 00:21:12.260 Compiler for C supports arguments -maes: YES 00:21:12.260 Compiler for C supports arguments -mavx512f: YES (cached) 00:21:12.260 Compiler for C supports arguments -mavx512bw: YES 00:21:12.260 Compiler for C supports arguments -mavx512dq: YES 00:21:12.260 Compiler for C supports arguments -mavx512vl: YES 00:21:12.260 Compiler for C supports arguments -mvpclmulqdq: YES 00:21:12.260 Compiler for C supports arguments -mavx2: YES 00:21:12.260 Compiler for C supports arguments -mavx: YES 00:21:12.260 Message: lib/net: Defining dependency "net" 00:21:12.260 Message: lib/meter: Defining dependency "meter" 00:21:12.260 Message: lib/ethdev: Defining dependency "ethdev" 00:21:12.260 Message: lib/pci: Defining dependency "pci" 00:21:12.260 Message: lib/cmdline: Defining dependency "cmdline" 00:21:12.260 Message: lib/hash: Defining dependency "hash" 00:21:12.260 Message: lib/timer: Defining dependency "timer" 00:21:12.260 Message: lib/compressdev: Defining dependency "compressdev" 00:21:12.260 Message: lib/cryptodev: Defining dependency "cryptodev" 00:21:12.260 Message: lib/dmadev: Defining dependency "dmadev" 00:21:12.260 Compiler for C supports arguments -Wno-cast-qual: YES 00:21:12.260 Message: lib/power: Defining dependency "power" 00:21:12.260 Message: lib/reorder: Defining dependency "reorder" 00:21:12.260 Message: lib/security: Defining dependency "security" 00:21:12.260 Has header "linux/userfaultfd.h" : YES 00:21:12.260 Has header "linux/vduse.h" : NO 00:21:12.260 Message: lib/vhost: Defining dependency "vhost" 00:21:12.260 Compiler for C supports arguments -Wno-format-truncation: YES 
(cached) 00:21:12.260 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:21:12.260 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:21:12.260 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:21:12.260 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:21:12.260 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:21:12.260 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:21:12.260 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:21:12.260 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:21:12.260 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:21:12.260 Program doxygen found: YES (/bin/doxygen) 00:21:12.260 Configuring doxy-api-html.conf using configuration 00:21:12.260 Configuring doxy-api-man.conf using configuration 00:21:12.261 Program mandb found: YES (/bin/mandb) 00:21:12.261 Program sphinx-build found: NO 00:21:12.261 Configuring rte_build_config.h using configuration 00:21:12.261 Message: 00:21:12.261 ================= 00:21:12.261 Applications Enabled 00:21:12.261 ================= 00:21:12.261 00:21:12.261 apps: 00:21:12.261 00:21:12.261 00:21:12.261 Message: 00:21:12.261 ================= 00:21:12.261 Libraries Enabled 00:21:12.261 ================= 00:21:12.261 00:21:12.261 libs: 00:21:12.261 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:21:12.261 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:21:12.261 cryptodev, dmadev, power, reorder, security, vhost, 00:21:12.261 00:21:12.261 Message: 00:21:12.261 =============== 00:21:12.261 Drivers Enabled 00:21:12.261 =============== 00:21:12.261 00:21:12.261 common: 00:21:12.261 00:21:12.261 bus: 00:21:12.261 pci, vdev, 00:21:12.261 mempool: 00:21:12.261 ring, 00:21:12.261 dma: 00:21:12.261 00:21:12.261 net: 00:21:12.261 00:21:12.261 crypto: 00:21:12.261 00:21:12.261 compress: 00:21:12.261 00:21:12.261 vdpa: 00:21:12.261 00:21:12.261 00:21:12.261 Message: 00:21:12.261 ================= 00:21:12.261 Content Skipped 00:21:12.261 ================= 00:21:12.261 00:21:12.261 apps: 00:21:12.261 dumpcap: explicitly disabled via build config 00:21:12.261 graph: explicitly disabled via build config 00:21:12.261 pdump: explicitly disabled via build config 00:21:12.261 proc-info: explicitly disabled via build config 00:21:12.261 test-acl: explicitly disabled via build config 00:21:12.261 test-bbdev: explicitly disabled via build config 00:21:12.261 test-cmdline: explicitly disabled via build config 00:21:12.261 test-compress-perf: explicitly disabled via build config 00:21:12.261 test-crypto-perf: explicitly disabled via build config 00:21:12.261 test-dma-perf: explicitly disabled via build config 00:21:12.261 test-eventdev: explicitly disabled via build config 00:21:12.261 test-fib: explicitly disabled via build config 00:21:12.261 test-flow-perf: explicitly disabled via build config 00:21:12.261 test-gpudev: explicitly disabled via build config 00:21:12.261 test-mldev: explicitly disabled via build config 00:21:12.261 test-pipeline: explicitly disabled via build config 00:21:12.261 test-pmd: explicitly disabled via build config 00:21:12.261 test-regex: explicitly disabled via build config 00:21:12.261 test-sad: explicitly disabled via build config 00:21:12.261 test-security-perf: explicitly disabled via build config 00:21:12.261 00:21:12.261 libs: 00:21:12.261 metrics: explicitly disabled via build config 00:21:12.261 acl: 
explicitly disabled via build config 00:21:12.261 bbdev: explicitly disabled via build config 00:21:12.261 bitratestats: explicitly disabled via build config 00:21:12.261 bpf: explicitly disabled via build config 00:21:12.261 cfgfile: explicitly disabled via build config 00:21:12.261 distributor: explicitly disabled via build config 00:21:12.261 efd: explicitly disabled via build config 00:21:12.261 eventdev: explicitly disabled via build config 00:21:12.261 dispatcher: explicitly disabled via build config 00:21:12.261 gpudev: explicitly disabled via build config 00:21:12.261 gro: explicitly disabled via build config 00:21:12.261 gso: explicitly disabled via build config 00:21:12.261 ip_frag: explicitly disabled via build config 00:21:12.261 jobstats: explicitly disabled via build config 00:21:12.261 latencystats: explicitly disabled via build config 00:21:12.261 lpm: explicitly disabled via build config 00:21:12.261 member: explicitly disabled via build config 00:21:12.261 pcapng: explicitly disabled via build config 00:21:12.261 rawdev: explicitly disabled via build config 00:21:12.261 regexdev: explicitly disabled via build config 00:21:12.261 mldev: explicitly disabled via build config 00:21:12.261 rib: explicitly disabled via build config 00:21:12.261 sched: explicitly disabled via build config 00:21:12.261 stack: explicitly disabled via build config 00:21:12.261 ipsec: explicitly disabled via build config 00:21:12.261 pdcp: explicitly disabled via build config 00:21:12.261 fib: explicitly disabled via build config 00:21:12.261 port: explicitly disabled via build config 00:21:12.261 pdump: explicitly disabled via build config 00:21:12.261 table: explicitly disabled via build config 00:21:12.261 pipeline: explicitly disabled via build config 00:21:12.261 graph: explicitly disabled via build config 00:21:12.261 node: explicitly disabled via build config 00:21:12.261 00:21:12.261 drivers: 00:21:12.261 common/cpt: not in enabled drivers build config 00:21:12.261 common/dpaax: not in enabled drivers build config 00:21:12.261 common/iavf: not in enabled drivers build config 00:21:12.261 common/idpf: not in enabled drivers build config 00:21:12.261 common/mvep: not in enabled drivers build config 00:21:12.261 common/octeontx: not in enabled drivers build config 00:21:12.261 bus/auxiliary: not in enabled drivers build config 00:21:12.261 bus/cdx: not in enabled drivers build config 00:21:12.261 bus/dpaa: not in enabled drivers build config 00:21:12.261 bus/fslmc: not in enabled drivers build config 00:21:12.262 bus/ifpga: not in enabled drivers build config 00:21:12.262 bus/platform: not in enabled drivers build config 00:21:12.262 bus/vmbus: not in enabled drivers build config 00:21:12.262 common/cnxk: not in enabled drivers build config 00:21:12.262 common/mlx5: not in enabled drivers build config 00:21:12.262 common/nfp: not in enabled drivers build config 00:21:12.262 common/qat: not in enabled drivers build config 00:21:12.262 common/sfc_efx: not in enabled drivers build config 00:21:12.262 mempool/bucket: not in enabled drivers build config 00:21:12.262 mempool/cnxk: not in enabled drivers build config 00:21:12.262 mempool/dpaa: not in enabled drivers build config 00:21:12.262 mempool/dpaa2: not in enabled drivers build config 00:21:12.262 mempool/octeontx: not in enabled drivers build config 00:21:12.262 mempool/stack: not in enabled drivers build config 00:21:12.262 dma/cnxk: not in enabled drivers build config 00:21:12.262 dma/dpaa: not in enabled drivers build config 00:21:12.262 
dma/dpaa2: not in enabled drivers build config 00:21:12.262 dma/hisilicon: not in enabled drivers build config 00:21:12.262 dma/idxd: not in enabled drivers build config 00:21:12.262 dma/ioat: not in enabled drivers build config 00:21:12.262 dma/skeleton: not in enabled drivers build config 00:21:12.262 net/af_packet: not in enabled drivers build config 00:21:12.262 net/af_xdp: not in enabled drivers build config 00:21:12.262 net/ark: not in enabled drivers build config 00:21:12.262 net/atlantic: not in enabled drivers build config 00:21:12.262 net/avp: not in enabled drivers build config 00:21:12.262 net/axgbe: not in enabled drivers build config 00:21:12.262 net/bnx2x: not in enabled drivers build config 00:21:12.262 net/bnxt: not in enabled drivers build config 00:21:12.262 net/bonding: not in enabled drivers build config 00:21:12.262 net/cnxk: not in enabled drivers build config 00:21:12.262 net/cpfl: not in enabled drivers build config 00:21:12.262 net/cxgbe: not in enabled drivers build config 00:21:12.262 net/dpaa: not in enabled drivers build config 00:21:12.262 net/dpaa2: not in enabled drivers build config 00:21:12.262 net/e1000: not in enabled drivers build config 00:21:12.262 net/ena: not in enabled drivers build config 00:21:12.262 net/enetc: not in enabled drivers build config 00:21:12.262 net/enetfec: not in enabled drivers build config 00:21:12.262 net/enic: not in enabled drivers build config 00:21:12.262 net/failsafe: not in enabled drivers build config 00:21:12.262 net/fm10k: not in enabled drivers build config 00:21:12.262 net/gve: not in enabled drivers build config 00:21:12.262 net/hinic: not in enabled drivers build config 00:21:12.262 net/hns3: not in enabled drivers build config 00:21:12.262 net/i40e: not in enabled drivers build config 00:21:12.262 net/iavf: not in enabled drivers build config 00:21:12.262 net/ice: not in enabled drivers build config 00:21:12.262 net/idpf: not in enabled drivers build config 00:21:12.262 net/igc: not in enabled drivers build config 00:21:12.262 net/ionic: not in enabled drivers build config 00:21:12.262 net/ipn3ke: not in enabled drivers build config 00:21:12.262 net/ixgbe: not in enabled drivers build config 00:21:12.262 net/mana: not in enabled drivers build config 00:21:12.262 net/memif: not in enabled drivers build config 00:21:12.262 net/mlx4: not in enabled drivers build config 00:21:12.262 net/mlx5: not in enabled drivers build config 00:21:12.262 net/mvneta: not in enabled drivers build config 00:21:12.262 net/mvpp2: not in enabled drivers build config 00:21:12.262 net/netvsc: not in enabled drivers build config 00:21:12.262 net/nfb: not in enabled drivers build config 00:21:12.262 net/nfp: not in enabled drivers build config 00:21:12.262 net/ngbe: not in enabled drivers build config 00:21:12.262 net/null: not in enabled drivers build config 00:21:12.262 net/octeontx: not in enabled drivers build config 00:21:12.262 net/octeon_ep: not in enabled drivers build config 00:21:12.262 net/pcap: not in enabled drivers build config 00:21:12.262 net/pfe: not in enabled drivers build config 00:21:12.262 net/qede: not in enabled drivers build config 00:21:12.262 net/ring: not in enabled drivers build config 00:21:12.262 net/sfc: not in enabled drivers build config 00:21:12.262 net/softnic: not in enabled drivers build config 00:21:12.262 net/tap: not in enabled drivers build config 00:21:12.262 net/thunderx: not in enabled drivers build config 00:21:12.262 net/txgbe: not in enabled drivers build config 00:21:12.262 net/vdev_netvsc: 
not in enabled drivers build config 00:21:12.262 net/vhost: not in enabled drivers build config 00:21:12.262 net/virtio: not in enabled drivers build config 00:21:12.262 net/vmxnet3: not in enabled drivers build config 00:21:12.262 raw/*: missing internal dependency, "rawdev" 00:21:12.262 crypto/armv8: not in enabled drivers build config 00:21:12.262 crypto/bcmfs: not in enabled drivers build config 00:21:12.262 crypto/caam_jr: not in enabled drivers build config 00:21:12.262 crypto/ccp: not in enabled drivers build config 00:21:12.262 crypto/cnxk: not in enabled drivers build config 00:21:12.262 crypto/dpaa_sec: not in enabled drivers build config 00:21:12.262 crypto/dpaa2_sec: not in enabled drivers build config 00:21:12.262 crypto/ipsec_mb: not in enabled drivers build config 00:21:12.262 crypto/mlx5: not in enabled drivers build config 00:21:12.262 crypto/mvsam: not in enabled drivers build config 00:21:12.262 crypto/nitrox: not in enabled drivers build config 00:21:12.262 crypto/null: not in enabled drivers build config 00:21:12.262 crypto/octeontx: not in enabled drivers build config 00:21:12.262 crypto/openssl: not in enabled drivers build config 00:21:12.262 crypto/scheduler: not in enabled drivers build config 00:21:12.262 crypto/uadk: not in enabled drivers build config 00:21:12.262 crypto/virtio: not in enabled drivers build config 00:21:12.262 compress/isal: not in enabled drivers build config 00:21:12.262 compress/mlx5: not in enabled drivers build config 00:21:12.262 compress/octeontx: not in enabled drivers build config 00:21:12.262 compress/zlib: not in enabled drivers build config 00:21:12.262 regex/*: missing internal dependency, "regexdev" 00:21:12.262 ml/*: missing internal dependency, "mldev" 00:21:12.262 vdpa/ifc: not in enabled drivers build config 00:21:12.262 vdpa/mlx5: not in enabled drivers build config 00:21:12.262 vdpa/nfp: not in enabled drivers build config 00:21:12.262 vdpa/sfc: not in enabled drivers build config 00:21:12.262 event/*: missing internal dependency, "eventdev" 00:21:12.263 baseband/*: missing internal dependency, "bbdev" 00:21:12.263 gpu/*: missing internal dependency, "gpudev" 00:21:12.263 00:21:12.263 00:21:12.833 Build targets in project: 85 00:21:12.833 00:21:12.833 DPDK 23.11.0 00:21:12.833 00:21:12.833 User defined options 00:21:12.833 default_library : static 00:21:12.833 libdir : lib 00:21:12.833 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:21:12.833 b_lto : true 00:21:12.833 b_sanitize : address 00:21:12.833 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon 00:21:12.833 c_link_args : 00:21:12.833 cpu_instruction_set: native 00:21:12.833 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:21:12.833 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:21:12.833 enable_docs : false 00:21:12.833 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:21:12.833 enable_kmods : false 00:21:12.833 tests : false 00:21:12.833 00:21:12.833 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:21:12.833 NOTICE: You are using Python 3.6 which is EOL. 
Starting with v0.62.0, Meson will require Python 3.7 or newer 00:21:13.403 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:21:13.403 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:21:13.403 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:21:13.403 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:21:13.403 [4/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:21:13.403 [5/264] Linking static target lib/librte_kvargs.a 00:21:13.403 [6/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:21:13.403 [7/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:21:13.403 [8/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:21:13.403 [9/264] Linking static target lib/librte_log.a 00:21:13.403 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:21:13.403 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:21:13.663 [12/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:21:13.663 [13/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:21:13.663 [14/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:21:13.663 [15/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:21:13.663 [16/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:21:13.922 [17/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:21:13.923 [18/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:21:13.923 [19/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:21:13.923 [20/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:21:13.923 [21/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:21:13.923 [22/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:21:13.923 [23/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:21:14.182 [24/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:21:14.182 [25/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:21:14.182 [26/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:21:14.182 [27/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:21:14.182 [28/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:21:14.182 [29/264] Linking static target lib/librte_telemetry.a 00:21:14.182 [30/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:21:14.442 [31/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:21:14.442 [32/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:21:14.442 [33/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:21:14.442 [34/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:21:14.442 [35/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:21:14.442 [36/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:21:14.442 [37/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:21:14.442 [38/264] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:21:14.702 [39/264] Linking target lib/librte_log.so.24.0 00:21:14.702 [40/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:21:14.702 [41/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:21:14.702 [42/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:21:14.702 [43/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:21:14.962 [44/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:21:14.962 [45/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:21:14.962 [46/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:21:14.962 [47/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:21:14.962 [48/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:21:14.962 [49/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:21:14.962 [50/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:21:15.291 [51/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:21:15.291 [52/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:21:15.291 [53/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:21:15.291 [54/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:21:15.292 [55/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:21:15.292 [56/264] Linking target lib/librte_kvargs.so.24.0 00:21:15.292 [57/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:21:15.292 [58/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:21:15.292 [59/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:21:15.292 [60/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:21:15.292 [61/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:21:15.292 [62/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:21:15.292 [63/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:21:15.292 [64/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:21:15.551 [65/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:21:15.551 [66/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:21:15.551 [67/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:21:15.551 [68/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:21:15.551 [69/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:21:15.812 [70/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:21:15.812 [71/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:21:15.812 [72/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:21:15.812 [73/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:21:15.812 [74/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:21:15.812 [75/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:21:15.812 [76/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:21:15.812 [77/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:21:16.071 [78/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 
00:21:16.071 [79/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:21:16.071 [80/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:21:16.071 [81/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:21:16.071 [82/264] Linking static target lib/librte_ring.a 00:21:16.071 [83/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:21:16.072 [84/264] Linking target lib/librte_telemetry.so.24.0 00:21:16.331 [85/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:21:16.331 [86/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:21:16.331 [87/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:21:16.331 [88/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:21:16.590 [89/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:21:16.590 [90/264] Linking static target lib/librte_eal.a 00:21:16.591 [91/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:21:16.591 [92/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:21:16.591 [93/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:21:16.591 [94/264] Linking static target lib/librte_mempool.a 00:21:16.591 [95/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:21:16.591 [96/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:21:16.850 [97/264] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:21:16.850 [98/264] Linking static target lib/net/libnet_crc_avx512_lib.a 00:21:16.850 [99/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:21:16.850 [100/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:21:16.850 [101/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:21:16.850 [102/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:21:16.850 [103/264] Linking static target lib/librte_rcu.a 00:21:16.850 [104/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:21:17.109 [105/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:21:17.109 [106/264] Linking static target lib/librte_meter.a 00:21:17.109 [107/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:21:17.109 [108/264] Linking static target lib/librte_net.a 00:21:17.109 [109/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:21:17.369 [110/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:21:17.369 [111/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:21:17.369 [112/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:21:17.628 [113/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:21:17.628 [114/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:21:17.629 [115/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:21:17.629 [116/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:21:17.629 [117/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:21:17.887 [118/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:21:17.887 [119/264] Linking static target lib/librte_mbuf.a 00:21:17.887 [120/264] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:21:18.146 [121/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:21:18.146 [122/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:21:18.405 [123/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:21:18.405 [124/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:21:18.405 [125/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:21:18.405 [126/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:21:18.405 [127/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:21:18.405 [128/264] Linking static target lib/librte_pci.a 00:21:18.405 [129/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:21:18.664 [130/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:21:18.664 [131/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:21:18.664 [132/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:21:18.664 [133/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:21:18.664 [134/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:21:18.664 [135/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:21:18.664 [136/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:21:18.664 [137/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:21:18.924 [138/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:21:18.924 [139/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:21:18.924 [140/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:21:18.924 [141/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:21:18.924 [142/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:21:18.924 [143/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:21:19.182 [144/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:21:19.182 [145/264] Linking static target lib/librte_cmdline.a 00:21:19.182 [146/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:21:19.182 [147/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:21:19.182 [148/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:21:19.441 [149/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:21:19.441 [150/264] Linking static target lib/librte_timer.a 00:21:19.441 [151/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:21:19.441 [152/264] Linking static target lib/librte_compressdev.a 00:21:19.441 [153/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:21:19.701 [154/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:21:19.701 [155/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:21:19.701 [156/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:21:19.959 [157/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:21:19.959 [158/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:21:19.959 [159/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 
00:21:19.959 [160/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:21:20.219 [161/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:21:20.219 [162/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:21:20.219 [163/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:21:20.219 [164/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:21:20.219 [165/264] Linking static target lib/librte_dmadev.a 00:21:20.478 [166/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:21:20.478 [167/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:21:20.478 [168/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:21:20.738 [169/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:21:20.738 [170/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:21:20.738 [171/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:21:20.738 [172/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:21:20.738 [173/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:21:20.997 [174/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:21:20.997 [175/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:21:20.997 [176/264] Linking static target lib/librte_power.a 00:21:21.256 [177/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:21:21.256 [178/264] Linking static target lib/librte_security.a 00:21:21.256 [179/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:21:21.256 [180/264] Linking static target lib/librte_reorder.a 00:21:21.256 [181/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:21:21.256 [182/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:21:21.514 [183/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:21:21.774 [184/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:21:21.774 [185/264] Linking static target lib/librte_cryptodev.a 00:21:21.774 [186/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:21:21.774 [187/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:21:21.774 [188/264] Linking static target lib/librte_ethdev.a 00:21:21.774 [189/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:21:22.034 [190/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:21:22.293 [191/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:21:22.551 [192/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:21:22.551 [193/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:21:22.551 [194/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:21:22.551 [195/264] Linking static target lib/librte_hash.a 00:21:22.812 [196/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:21:22.812 [197/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:21:22.812 [198/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:21:23.071 [199/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 
00:21:23.330 [200/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:21:23.330 [201/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:21:23.330 [202/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:21:23.330 [203/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:21:23.330 [204/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:21:23.330 [205/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:21:23.330 [206/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:21:23.589 [207/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:21:23.589 [208/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:21:23.589 [209/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:21:23.589 [210/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:21:23.589 [211/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:21:23.589 [212/264] Linking static target drivers/librte_bus_vdev.a 00:21:23.589 [213/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:21:23.589 [214/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:21:23.589 [215/264] Linking static target drivers/librte_bus_pci.a 00:21:23.848 [216/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:21:23.848 [217/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:21:23.848 [218/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:21:24.107 [219/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:21:24.107 [220/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:21:24.107 [221/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:21:24.107 [222/264] Linking static target drivers/librte_mempool_ring.a 00:21:24.107 [223/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:21:24.673 [224/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:21:29.986 [225/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:21:33.291 [226/264] Linking target lib/librte_eal.so.24.0 00:21:33.558 [227/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:21:33.817 [228/264] Linking target lib/librte_meter.so.24.0 00:21:33.817 [229/264] Linking target lib/librte_pci.so.24.0 00:21:33.817 [230/264] Linking target lib/librte_ring.so.24.0 00:21:33.817 [231/264] Linking target drivers/librte_bus_vdev.so.24.0 00:21:33.817 [232/264] Linking target lib/librte_timer.so.24.0 00:21:34.076 [233/264] Linking target lib/librte_dmadev.so.24.0 00:21:34.076 [234/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:21:34.076 [235/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:21:34.337 [236/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:21:34.337 [237/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:21:34.596 [238/264] Generating symbol file 
lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols
00:21:34.857 [239/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:21:35.115 [240/264] Linking target lib/librte_rcu.so.24.0
00:21:35.115 [241/264] Linking target lib/librte_mempool.so.24.0
00:21:35.374 [242/264] Linking target drivers/librte_bus_pci.so.24.0
00:21:35.374 [243/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols
00:21:35.374 [244/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols
00:21:35.954 [245/264] Linking target drivers/librte_mempool_ring.so.24.0
00:21:37.331 [246/264] Linking target lib/librte_mbuf.so.24.0
00:21:37.592 [247/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols
00:21:38.158 [248/264] Linking target lib/librte_reorder.so.24.0
00:21:38.158 [249/264] Linking target lib/librte_compressdev.so.24.0
00:21:38.421 [250/264] Linking target lib/librte_net.so.24.0
00:21:39.000 [251/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols
00:21:39.936 [252/264] Linking target lib/librte_cryptodev.so.24.0
00:21:39.936 In function '_mm256_storeu_si256',
00:21:39.936 inlined from 'rte_memcpy_generic' at ../lib/eal/x86/include/rte_memcpy.h:347:2,
00:21:39.936 inlined from 'rte_cryptodev_sym_session_set_user_data' at ../lib/eal/x86/include/rte_memcpy.h:868:10:
00:21:39.936 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/avxintrin.h:928:8: warning: writing 32 bytes into a region of size 0 [-Wstringop-overflow=]
00:21:39.936 928 | *__P = __A;
00:21:39.936 | ^
00:21:39.936 ../lib/cryptodev/rte_cryptodev.c: In function 'rte_cryptodev_sym_session_set_user_data':
00:21:39.936 ../lib/cryptodev/cryptodev_pmd.h:156:10: note: at offset 0 to object 'driver_priv_data' with size 0 declared here
00:21:39.936 156 | uint8_t driver_priv_data[0];
00:21:39.936 | ^
00:21:39.936 In function '_mm_storeu_si128',
00:21:39.937 inlined from 'rte_memcpy_generic' at ../lib/eal/x86/include/rte_memcpy.h:334:2,
00:21:39.937 inlined from 'rte_cryptodev_sym_session_set_user_data' at ../lib/eal/x86/include/rte_memcpy.h:868:10:
00:21:39.937 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/emmintrin.h:727:8: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=]
00:21:39.937 727 | *__P = __B;
00:21:39.937 | ^
00:21:39.937 ../lib/cryptodev/rte_cryptodev.c: In function 'rte_cryptodev_sym_session_set_user_data':
00:21:39.937 ../lib/cryptodev/cryptodev_pmd.h:156:10: note: at offset 0 to object 'driver_priv_data' with size 0 declared here
00:21:39.937 156 | uint8_t driver_priv_data[0];
00:21:39.937 | ^
00:21:39.937 In function '_mm_storeu_si128',
00:21:39.937 inlined from 'rte_memcpy_generic' at ../lib/eal/x86/include/rte_memcpy.h:334:2,
00:21:39.937 inlined from 'rte_cryptodev_sym_session_set_user_data' at ../lib/eal/x86/include/rte_memcpy.h:868:10:
00:21:39.937 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/emmintrin.h:727:8: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=]
00:21:39.937 727 | *__P = __B;
00:21:39.937 | ^
00:21:39.937 ../lib/cryptodev/rte_cryptodev.c: In function 'rte_cryptodev_sym_session_set_user_data':
00:21:39.937 ../lib/cryptodev/cryptodev_pmd.h:156:10: note: at offset 0 to object 'driver_priv_data' with size 0 declared here
00:21:39.937 156 | uint8_t driver_priv_data[0];
00:21:39.937 | ^
00:21:39.937 In function '_mm256_storeu_si256',
00:21:39.937 inlined from 'rte_memcpy_aligned' at ../lib/eal/x86/include/rte_memcpy.h:347:2,
00:21:39.937 inlined from 'rte_cryptodev_sym_session_set_user_data' at ../lib/eal/x86/include/rte_memcpy.h:866:10:
00:21:39.937 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/avxintrin.h:928:8: warning: writing 32 bytes into a region of size 0 [-Wstringop-overflow=]
00:21:39.937 928 | *__P = __A;
00:21:39.937 | ^
00:21:39.937 ../lib/cryptodev/rte_cryptodev.c: In function 'rte_cryptodev_sym_session_set_user_data':
00:21:39.937 ../lib/cryptodev/cryptodev_pmd.h:156:10: note: at offset 0 to object 'driver_priv_data' with size 0 declared here
00:21:39.937 156 | uint8_t driver_priv_data[0];
00:21:39.937 | ^
00:21:39.937 In function '_mm_storeu_si128',
00:21:39.937 inlined from 'rte_memcpy_aligned' at ../lib/eal/x86/include/rte_memcpy.h:334:2,
00:21:39.937 inlined from 'rte_cryptodev_sym_session_set_user_data' at ../lib/eal/x86/include/rte_memcpy.h:866:10:
00:21:39.937 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/emmintrin.h:727:8: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=]
00:21:39.937 727 | *__P = __B;
00:21:39.937 | ^
00:21:39.937 ../lib/cryptodev/rte_cryptodev.c: In function 'rte_cryptodev_sym_session_set_user_data':
00:21:39.937 ../lib/cryptodev/cryptodev_pmd.h:156:10: note: at offset 0 to object 'driver_priv_data' with size 0 declared here
00:21:39.937 156 | uint8_t driver_priv_data[0];
00:21:39.937 | ^
00:21:39.937 [253/264] Linking target lib/librte_cmdline.so.24.0
00:21:40.196 [254/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols
00:21:40.454 [255/264] Linking target lib/librte_security.so.24.0
00:21:43.741 [256/264] Linking target lib/librte_hash.so.24.0
00:21:43.741 In function '_mm256_storeu_si256',
00:21:43.741 inlined from 'rte_memcpy_generic' at ../lib/eal/x86/include/rte_memcpy.h:347:2,
00:21:43.741 inlined from 'rte_thash_init_ctx' at ../lib/eal/x86/include/rte_memcpy.h:868:10,
00:21:43.741 inlined from 'rte_thash_init_ctx' at ../lib/hash/rte_thash.c:211:1:
00:21:43.741 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/avxintrin.h:928:8: warning: writing 32 bytes into a region of size 0 [-Wstringop-overflow=]
00:21:43.741 928 | *__P = __A;
00:21:43.741 | ^
00:21:43.741 ../lib/hash/rte_thash.c: In function 'rte_thash_init_ctx':
00:21:43.741 ../lib/hash/rte_thash.c:91:11: note: at offset 0 to object 'hash_key' with size 0 declared here
00:21:43.741 91 | uint8_t hash_key[0];
00:21:43.741 | ^
00:21:43.741 In function '_mm_storeu_si128',
00:21:43.741 inlined from 'rte_memcpy_generic' at ../lib/eal/x86/include/rte_memcpy.h:334:2,
00:21:43.741 inlined from 'rte_thash_init_ctx' at ../lib/eal/x86/include/rte_memcpy.h:868:10,
00:21:43.741 inlined from 'rte_thash_init_ctx' at ../lib/hash/rte_thash.c:211:1:
00:21:43.741 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/emmintrin.h:727:8: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=]
00:21:43.741 727 | *__P = __B;
00:21:43.741 | ^
00:21:43.741 ../lib/hash/rte_thash.c: In function 'rte_thash_init_ctx':
00:21:43.741 ../lib/hash/rte_thash.c:91:11: note: at offset 0 to object 'hash_key' with size 0 declared here
00:21:43.741 91 | uint8_t hash_key[0];
00:21:43.741 | ^
00:21:43.741 In function '_mm_storeu_si128',
00:21:43.741 inlined from 'rte_memcpy_generic' at ../lib/eal/x86/include/rte_memcpy.h:334:2,
00:21:43.741 inlined from 'rte_thash_init_ctx' at ../lib/eal/x86/include/rte_memcpy.h:868:10,
00:21:43.741 inlined from 'rte_thash_init_ctx' at ../lib/hash/rte_thash.c:211:1:
00:21:43.741 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/emmintrin.h:727:8: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=]
00:21:43.741 727 | *__P = __B;
00:21:43.741 | ^
00:21:43.741 ../lib/hash/rte_thash.c: In function 'rte_thash_init_ctx':
00:21:43.741 ../lib/hash/rte_thash.c:91:11: note: at offset 0 to object 'hash_key' with size 0 declared here
00:21:43.741 91 | uint8_t hash_key[0];
00:21:43.741 | ^
00:21:43.741 In function '_mm256_storeu_si256',
00:21:43.741 inlined from 'rte_memcpy_aligned' at ../lib/eal/x86/include/rte_memcpy.h:347:2,
00:21:43.741 inlined from 'rte_thash_init_ctx' at ../lib/eal/x86/include/rte_memcpy.h:866:10,
00:21:43.741 inlined from 'rte_thash_init_ctx' at ../lib/hash/rte_thash.c:211:1:
00:21:43.741 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/avxintrin.h:928:8: warning: writing 32 bytes into a region of size 0 [-Wstringop-overflow=]
00:21:43.741 928 | *__P = __A;
00:21:43.741 | ^
00:21:43.741 ../lib/hash/rte_thash.c: In function 'rte_thash_init_ctx':
00:21:43.741 ../lib/hash/rte_thash.c:91:11: note: at offset 0 to object 'hash_key' with size 0 declared here
00:21:43.741 91 | uint8_t hash_key[0];
00:21:43.741 | ^
00:21:43.741 In function '_mm_storeu_si128',
00:21:43.741 inlined from 'rte_memcpy_aligned' at ../lib/eal/x86/include/rte_memcpy.h:334:2,
00:21:43.741 inlined from 'rte_thash_init_ctx' at ../lib/eal/x86/include/rte_memcpy.h:866:10,
00:21:43.741 inlined from 'rte_thash_init_ctx' at ../lib/hash/rte_thash.c:211:1:
00:21:43.741 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/emmintrin.h:727:8: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=]
00:21:43.741 727 | *__P = __B;
00:21:43.741 | ^
00:21:43.741 ../lib/hash/rte_thash.c: In function 'rte_thash_init_ctx':
00:21:43.741 ../lib/hash/rte_thash.c:91:11: note: at offset 0 to object 'hash_key' with size 0 declared here
00:21:43.741 91 | uint8_t hash_key[0];
00:21:43.741 | ^
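The -Wstringop-overflow= warnings above appear to be benign build noise from compiling DPDK with GCC 10 rather than a real overflow. Both cryptodev_pmd.h and rte_thash.c declare a variable-size trailer as a GNU zero-length array ('uint8_t driver_priv_data[0];' and 'uint8_t hash_key[0];'), and the storage for that trailer is over-allocated past the end of the enclosing struct. Once rte_memcpy() is inlined down to the _mm256_storeu_si256/_mm_storeu_si128 intrinsic stores named in the diagnostics, the compiler's object-size tracking sees only a member whose declared size is 0 and flags every 16- and 32-byte store into it. The standalone C sketch below illustrates the idiom; the struct is hypothetical (only the member name is taken from the diagnostics), and whether the warning actually fires depends on the GCC version, optimization level, and how the copy gets inlined:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for the DPDK pattern seen in the diagnostics:
 * a fixed header followed by a variable-size trailer declared as a
 * GNU zero-length array (a pre-C99 extension). */
struct session {
	uint16_t user_data_sz;
	uint8_t  driver_priv_data[0];	/* declared size is 0 bytes */
};

int main(void)
{
	size_t payload = 32;
	/* The trailer's real storage comes from over-allocating the struct. */
	struct session *s = malloc(sizeof(*s) + payload);

	if (s == NULL)
		return 1;
	s->user_data_sz = (uint16_t)payload;
	/* Well-defined at run time, but the destination's declared size is 0,
	 * which is all the analysis behind -Wstringop-overflow= sees once an
	 * inlined, vectorized copy targets it. */
	memset(s->driver_priv_data, 0xab, payload);
	printf("%zu-byte header, %u-byte trailer\n",
	       sizeof(struct session), (unsigned)s->user_data_sz);
	free(s);
	return 0;
}

The C99 spelling of the same layout is a flexible array member, 'uint8_t driver_priv_data[];', which the compiler treats as explicitly unbounded and which generally avoids this class of false positive. Either way, the stores here land in memory that malloc() actually provided, which is why the build proceeds with warnings only.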
00:21:43.741 [257/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols
00:21:50.311 [258/264] Linking target lib/librte_ethdev.so.24.0
00:21:50.311 [259/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols
00:21:52.216 [260/264] Linking target lib/librte_power.so.24.0
00:21:58.780 [261/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:21:58.780 [262/264] Linking static target lib/librte_vhost.a
00:22:00.684 [263/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:22:56.918 [264/264] Linking target lib/librte_vhost.so.24.0
00:22:56.918 NOTICE: You are using Python 3.6 which is EOL.
Starting with v0.62.0, Meson will require Python 3.7 or newer 00:22:56.918 CC lib/log/log.o 00:22:56.918 CC lib/ut_mock/mock.o 00:22:56.918 CC lib/log/log_flags.o 00:22:56.918 CC lib/ut/ut.o 00:22:56.918 CC lib/log/log_deprecated.o 00:22:56.918 LIB libspdk_ut_mock.a 00:22:56.918 LIB libspdk_log.a 00:22:56.918 LIB libspdk_ut.a 00:22:56.918 CXX lib/trace_parser/trace.o 00:22:56.918 CC lib/dma/dma.o 00:22:56.918 CC lib/util/base64.o 00:22:56.918 CC lib/ioat/ioat.o 00:22:56.918 CC lib/util/bit_array.o 00:22:56.918 CC lib/util/cpuset.o 00:22:56.918 CC lib/util/crc16.o 00:22:56.918 CC lib/util/crc32.o 00:22:56.918 CC lib/util/crc32c.o 00:22:56.918 CC lib/vfio_user/host/vfio_user_pci.o 00:22:56.918 LIB libspdk_dma.a 00:22:56.918 CC lib/util/crc32_ieee.o 00:22:56.918 CC lib/vfio_user/host/vfio_user.o 00:22:56.918 CC lib/util/crc64.o 00:22:56.918 CC lib/util/dif.o 00:22:56.918 CC lib/util/fd.o 00:22:56.918 CC lib/util/file.o 00:22:56.918 LIB libspdk_ioat.a 00:22:56.918 CC lib/util/hexlify.o 00:22:56.918 CC lib/util/iov.o 00:22:56.918 CC lib/util/math.o 00:22:56.918 LIB libspdk_vfio_user.a 00:22:56.918 CC lib/util/pipe.o 00:22:56.918 CC lib/util/strerror_tls.o 00:22:56.918 CC lib/util/string.o 00:22:56.918 CC lib/util/uuid.o 00:22:56.918 CC lib/util/fd_group.o 00:22:56.918 CC lib/util/xor.o 00:22:56.918 CC lib/util/zipf.o 00:22:56.919 LIB libspdk_trace_parser.a 00:22:56.919 LIB libspdk_util.a 00:22:56.919 CC lib/idxd/idxd.o 00:22:56.919 CC lib/conf/conf.o 00:22:56.919 CC lib/env_dpdk/env.o 00:22:56.919 CC lib/vmd/vmd.o 00:22:56.919 CC lib/json/json_parse.o 00:22:56.919 CC lib/env_dpdk/memory.o 00:22:56.919 CC lib/idxd/idxd_user.o 00:22:56.919 CC lib/vmd/led.o 00:22:56.919 CC lib/rdma/common.o 00:22:56.919 CC lib/json/json_util.o 00:22:56.919 CC lib/rdma/rdma_verbs.o 00:22:56.919 CC lib/json/json_write.o 00:22:56.919 LIB libspdk_conf.a 00:22:56.919 CC lib/env_dpdk/pci.o 00:22:56.919 CC lib/env_dpdk/init.o 00:22:56.919 CC lib/env_dpdk/threads.o 00:22:56.919 CC lib/env_dpdk/pci_ioat.o 00:22:56.919 LIB libspdk_idxd.a 00:22:56.919 LIB libspdk_vmd.a 00:22:56.919 CC lib/env_dpdk/pci_virtio.o 00:22:56.919 CC lib/env_dpdk/pci_vmd.o 00:22:56.919 CC lib/env_dpdk/pci_idxd.o 00:22:56.919 LIB libspdk_rdma.a 00:22:56.919 CC lib/env_dpdk/pci_event.o 00:22:56.919 LIB libspdk_json.a 00:22:56.919 CC lib/env_dpdk/sigbus_handler.o 00:22:56.919 CC lib/env_dpdk/pci_dpdk.o 00:22:56.919 CC lib/env_dpdk/pci_dpdk_2207.o 00:22:56.919 CC lib/env_dpdk/pci_dpdk_2211.o 00:22:56.919 CC lib/jsonrpc/jsonrpc_server.o 00:22:56.919 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:22:56.919 CC lib/jsonrpc/jsonrpc_client.o 00:22:56.919 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:22:56.919 LIB libspdk_jsonrpc.a 00:22:56.919 LIB libspdk_env_dpdk.a 00:22:56.919 CC lib/rpc/rpc.o 00:22:56.919 LIB libspdk_rpc.a 00:22:56.919 CC lib/notify/notify.o 00:22:56.919 CC lib/sock/sock.o 00:22:56.919 CC lib/trace/trace.o 00:22:56.919 CC lib/notify/notify_rpc.o 00:22:56.919 CC lib/sock/sock_rpc.o 00:22:56.919 CC lib/trace/trace_flags.o 00:22:56.919 CC lib/trace/trace_rpc.o 00:22:56.919 LIB libspdk_notify.a 00:22:56.919 LIB libspdk_trace.a 00:22:56.919 LIB libspdk_sock.a 00:22:56.919 CC lib/thread/thread.o 00:22:56.919 CC lib/thread/iobuf.o 00:22:56.919 CC lib/nvme/nvme_ctrlr_cmd.o 00:22:56.919 CC lib/nvme/nvme_ctrlr.o 00:22:56.919 CC lib/nvme/nvme_fabric.o 00:22:56.919 CC lib/nvme/nvme_ns_cmd.o 00:22:56.919 CC lib/nvme/nvme_ns.o 00:22:56.919 CC lib/nvme/nvme_pcie_common.o 00:22:56.919 CC lib/nvme/nvme_pcie.o 00:22:56.919 CC lib/nvme/nvme_qpair.o 00:22:56.919 CC 
lib/nvme/nvme.o 00:22:56.919 LIB libspdk_thread.a 00:22:56.919 CC lib/nvme/nvme_quirks.o 00:22:56.919 CC lib/nvme/nvme_transport.o 00:22:56.919 CC lib/nvme/nvme_discovery.o 00:22:56.919 CC lib/accel/accel.o 00:22:56.919 CC lib/blob/blobstore.o 00:22:56.919 CC lib/init/json_config.o 00:22:56.919 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:22:56.919 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:22:56.919 CC lib/virtio/virtio.o 00:22:56.919 CC lib/init/subsystem.o 00:22:56.919 CC lib/virtio/virtio_vhost_user.o 00:22:56.919 CC lib/accel/accel_rpc.o 00:22:56.919 CC lib/nvme/nvme_tcp.o 00:22:56.919 CC lib/virtio/virtio_vfio_user.o 00:22:56.919 CC lib/init/subsystem_rpc.o 00:22:56.919 CC lib/nvme/nvme_opal.o 00:22:56.919 CC lib/nvme/nvme_io_msg.o 00:22:56.919 CC lib/virtio/virtio_pci.o 00:22:56.919 CC lib/accel/accel_sw.o 00:22:56.919 CC lib/blob/request.o 00:22:56.919 CC lib/init/rpc.o 00:22:56.919 CC lib/nvme/nvme_poll_group.o 00:22:56.919 CC lib/nvme/nvme_zns.o 00:22:56.919 LIB libspdk_init.a 00:22:56.919 LIB libspdk_virtio.a 00:22:56.919 LIB libspdk_accel.a 00:22:56.919 CC lib/blob/zeroes.o 00:22:56.919 CC lib/blob/blob_bs_dev.o 00:22:56.919 CC lib/nvme/nvme_cuse.o 00:22:56.919 CC lib/nvme/nvme_vfio_user.o 00:22:56.919 CC lib/event/app.o 00:22:56.919 CC lib/nvme/nvme_rdma.o 00:22:56.919 CC lib/bdev/bdev.o 00:22:56.919 LIB libspdk_blob.a 00:22:56.919 CC lib/bdev/bdev_rpc.o 00:22:56.919 CC lib/lvol/lvol.o 00:22:56.919 CC lib/blobfs/blobfs.o 00:22:56.919 CC lib/event/reactor.o 00:22:56.919 CC lib/blobfs/tree.o 00:22:56.919 CC lib/event/log_rpc.o 00:22:56.919 CC lib/bdev/bdev_zone.o 00:22:56.919 CC lib/bdev/part.o 00:22:56.919 CC lib/event/app_rpc.o 00:22:56.919 CC lib/bdev/scsi_nvme.o 00:22:56.919 CC lib/event/scheduler_static.o 00:22:56.919 LIB libspdk_blobfs.a 00:22:56.919 LIB libspdk_lvol.a 00:22:56.919 LIB libspdk_event.a 00:22:56.919 LIB libspdk_nvme.a 00:22:56.919 LIB libspdk_bdev.a 00:22:56.919 CC lib/nvmf/ctrlr.o 00:22:56.919 CC lib/nbd/nbd.o 00:22:56.919 CC lib/nvmf/ctrlr_discovery.o 00:22:56.919 CC lib/nbd/nbd_rpc.o 00:22:56.919 CC lib/nvmf/ctrlr_bdev.o 00:22:56.919 CC lib/scsi/dev.o 00:22:56.919 CC lib/ftl/ftl_core.o 00:22:56.919 CC lib/scsi/lun.o 00:22:56.919 CC lib/nvmf/subsystem.o 00:22:56.919 CC lib/nvmf/nvmf.o 00:22:56.919 CC lib/scsi/port.o 00:22:56.919 CC lib/nvmf/nvmf_rpc.o 00:22:56.919 CC lib/nvmf/transport.o 00:22:56.919 LIB libspdk_nbd.a 00:22:56.919 CC lib/nvmf/tcp.o 00:22:56.919 CC lib/scsi/scsi.o 00:22:56.919 CC lib/ftl/ftl_init.o 00:22:56.919 CC lib/nvmf/rdma.o 00:22:56.919 CC lib/ftl/ftl_layout.o 00:22:56.919 CC lib/scsi/scsi_bdev.o 00:22:56.919 CC lib/scsi/scsi_pr.o 00:22:56.919 CC lib/ftl/ftl_debug.o 00:22:56.919 CC lib/ftl/ftl_io.o 00:22:56.919 CC lib/scsi/scsi_rpc.o 00:22:56.919 CC lib/scsi/task.o 00:22:56.919 CC lib/ftl/ftl_sb.o 00:22:56.919 CC lib/ftl/ftl_l2p.o 00:22:56.919 CC lib/ftl/ftl_l2p_flat.o 00:22:56.919 CC lib/ftl/ftl_nv_cache.o 00:22:56.919 CC lib/ftl/ftl_band.o 00:22:56.919 CC lib/ftl/ftl_band_ops.o 00:22:56.919 CC lib/ftl/ftl_writer.o 00:22:56.919 LIB libspdk_scsi.a 00:22:56.919 CC lib/ftl/ftl_rq.o 00:22:56.919 CC lib/ftl/ftl_reloc.o 00:22:56.919 CC lib/ftl/ftl_l2p_cache.o 00:22:56.919 CC lib/iscsi/conn.o 00:22:56.919 CC lib/ftl/ftl_p2l.o 00:22:56.919 CC lib/ftl/mngt/ftl_mngt.o 00:22:56.919 CC lib/iscsi/init_grp.o 00:22:56.919 LIB libspdk_nvmf.a 00:22:56.919 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:22:56.919 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:22:56.919 CC lib/vhost/vhost.o 00:22:56.919 CC lib/ftl/mngt/ftl_mngt_startup.o 00:22:56.919 CC lib/iscsi/iscsi.o 00:22:56.919 CC 
lib/ftl/mngt/ftl_mngt_md.o 00:22:56.919 CC lib/iscsi/md5.o 00:22:56.919 CC lib/vhost/vhost_rpc.o 00:22:56.919 CC lib/ftl/mngt/ftl_mngt_misc.o 00:22:56.919 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:22:56.919 CC lib/iscsi/param.o 00:22:56.919 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:22:56.919 CC lib/ftl/mngt/ftl_mngt_band.o 00:22:56.919 CC lib/iscsi/portal_grp.o 00:22:56.919 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:22:56.919 CC lib/iscsi/tgt_node.o 00:22:56.919 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:22:56.919 CC lib/vhost/vhost_scsi.o 00:22:56.919 CC lib/iscsi/iscsi_subsystem.o 00:22:56.919 CC lib/iscsi/iscsi_rpc.o 00:22:56.919 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:22:56.919 CC lib/iscsi/task.o 00:22:56.919 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:22:56.919 CC lib/vhost/vhost_blk.o 00:22:56.919 CC lib/vhost/rte_vhost_user.o 00:22:56.919 CC lib/ftl/utils/ftl_conf.o 00:22:56.919 CC lib/ftl/utils/ftl_md.o 00:22:56.919 CC lib/ftl/utils/ftl_mempool.o 00:22:56.919 CC lib/ftl/utils/ftl_bitmap.o 00:22:56.919 CC lib/ftl/utils/ftl_property.o 00:22:56.919 LIB libspdk_iscsi.a 00:22:56.919 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:22:56.919 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:22:56.919 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:22:56.919 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:22:56.919 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:22:56.919 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:22:56.919 CC lib/ftl/upgrade/ftl_sb_v3.o 00:22:56.919 CC lib/ftl/upgrade/ftl_sb_v5.o 00:22:56.919 CC lib/ftl/nvc/ftl_nvc_dev.o 00:22:56.919 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:22:56.919 CC lib/ftl/base/ftl_base_dev.o 00:22:56.919 CC lib/ftl/base/ftl_base_bdev.o 00:22:56.919 LIB libspdk_ftl.a 00:22:56.919 LIB libspdk_vhost.a 00:22:57.179 CC module/env_dpdk/env_dpdk_rpc.o 00:22:57.179 CC module/scheduler/gscheduler/gscheduler.o 00:22:57.179 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:22:57.179 CC module/accel/iaa/accel_iaa.o 00:22:57.179 CC module/scheduler/dynamic/scheduler_dynamic.o 00:22:57.179 CC module/accel/dsa/accel_dsa.o 00:22:57.179 CC module/sock/posix/posix.o 00:22:57.179 CC module/blob/bdev/blob_bdev.o 00:22:57.179 CC module/accel/error/accel_error.o 00:22:57.179 CC module/accel/ioat/accel_ioat.o 00:22:57.438 LIB libspdk_env_dpdk_rpc.a 00:22:57.438 CC module/accel/ioat/accel_ioat_rpc.o 00:22:57.438 LIB libspdk_scheduler_dpdk_governor.a 00:22:57.438 LIB libspdk_scheduler_dynamic.a 00:22:57.438 LIB libspdk_scheduler_gscheduler.a 00:22:57.438 CC module/accel/dsa/accel_dsa_rpc.o 00:22:57.438 CC module/accel/iaa/accel_iaa_rpc.o 00:22:57.438 CC module/accel/error/accel_error_rpc.o 00:22:57.438 LIB libspdk_blob_bdev.a 00:22:57.438 LIB libspdk_accel_ioat.a 00:22:57.438 LIB libspdk_accel_iaa.a 00:22:57.438 LIB libspdk_accel_dsa.a 00:22:57.438 LIB libspdk_accel_error.a 00:22:57.438 LIB libspdk_sock_posix.a 00:22:57.696 CC module/blobfs/bdev/blobfs_bdev.o 00:22:57.696 CC module/bdev/error/vbdev_error.o 00:22:57.696 CC module/bdev/gpt/gpt.o 00:22:57.696 CC module/bdev/delay/vbdev_delay.o 00:22:57.696 CC module/bdev/lvol/vbdev_lvol.o 00:22:57.696 CC module/bdev/malloc/bdev_malloc.o 00:22:57.696 CC module/bdev/null/bdev_null.o 00:22:57.696 CC module/bdev/nvme/bdev_nvme.o 00:22:57.697 CC module/bdev/passthru/vbdev_passthru.o 00:22:57.697 CC module/bdev/raid/bdev_raid.o 00:22:57.697 CC module/bdev/gpt/vbdev_gpt.o 00:22:57.697 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:22:57.697 CC module/bdev/error/vbdev_error_rpc.o 00:22:57.697 CC module/bdev/null/bdev_null_rpc.o 00:22:57.697 CC module/bdev/malloc/bdev_malloc_rpc.o 00:22:57.697 CC 
module/bdev/passthru/vbdev_passthru_rpc.o 00:22:57.697 CC module/bdev/delay/vbdev_delay_rpc.o 00:22:57.697 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:22:57.697 LIB libspdk_blobfs_bdev.a 00:22:57.697 LIB libspdk_bdev_error.a 00:22:57.956 CC module/bdev/raid/bdev_raid_rpc.o 00:22:57.956 LIB libspdk_bdev_gpt.a 00:22:57.956 LIB libspdk_bdev_null.a 00:22:57.956 LIB libspdk_bdev_malloc.a 00:22:57.956 LIB libspdk_bdev_delay.a 00:22:57.956 LIB libspdk_bdev_passthru.a 00:22:57.956 CC module/bdev/raid/bdev_raid_sb.o 00:22:57.956 CC module/bdev/split/vbdev_split.o 00:22:57.956 CC module/bdev/aio/bdev_aio.o 00:22:57.956 CC module/bdev/ftl/bdev_ftl.o 00:22:57.956 CC module/bdev/zone_block/vbdev_zone_block.o 00:22:57.956 LIB libspdk_bdev_lvol.a 00:22:57.956 CC module/bdev/daos/bdev_daos.o 00:22:57.956 CC module/bdev/ftl/bdev_ftl_rpc.o 00:22:57.956 CC module/bdev/virtio/bdev_virtio_scsi.o 00:22:57.956 CC module/bdev/split/vbdev_split_rpc.o 00:22:57.956 CC module/bdev/raid/raid0.o 00:22:57.956 CC module/bdev/aio/bdev_aio_rpc.o 00:22:58.214 LIB libspdk_bdev_split.a 00:22:58.214 CC module/bdev/raid/raid1.o 00:22:58.214 CC module/bdev/raid/concat.o 00:22:58.214 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:22:58.214 CC module/bdev/daos/bdev_daos_rpc.o 00:22:58.214 CC module/bdev/nvme/bdev_nvme_rpc.o 00:22:58.214 LIB libspdk_bdev_aio.a 00:22:58.214 LIB libspdk_bdev_ftl.a 00:22:58.214 CC module/bdev/nvme/nvme_rpc.o 00:22:58.214 CC module/bdev/nvme/bdev_mdns_client.o 00:22:58.214 CC module/bdev/virtio/bdev_virtio_blk.o 00:22:58.214 CC module/bdev/nvme/vbdev_opal.o 00:22:58.214 CC module/bdev/virtio/bdev_virtio_rpc.o 00:22:58.214 CC module/bdev/nvme/vbdev_opal_rpc.o 00:22:58.214 LIB libspdk_bdev_zone_block.a 00:22:58.214 LIB libspdk_bdev_raid.a 00:22:58.214 LIB libspdk_bdev_daos.a 00:22:58.214 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:22:58.473 LIB libspdk_bdev_virtio.a 00:22:58.473 LIB libspdk_bdev_nvme.a 00:22:58.733 CC module/event/subsystems/sock/sock.o 00:22:58.733 CC module/event/subsystems/vmd/vmd.o 00:22:58.733 CC module/event/subsystems/scheduler/scheduler.o 00:22:58.733 CC module/event/subsystems/iobuf/iobuf.o 00:22:58.733 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:22:58.733 CC module/event/subsystems/vmd/vmd_rpc.o 00:22:58.733 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:22:58.991 LIB libspdk_event_scheduler.a 00:22:58.991 LIB libspdk_event_sock.a 00:22:58.991 LIB libspdk_event_vhost_blk.a 00:22:58.991 LIB libspdk_event_iobuf.a 00:22:58.991 LIB libspdk_event_vmd.a 00:22:58.991 CC module/event/subsystems/accel/accel.o 00:22:59.250 LIB libspdk_event_accel.a 00:22:59.509 CC module/event/subsystems/bdev/bdev.o 00:22:59.509 LIB libspdk_event_bdev.a 00:22:59.768 CC module/event/subsystems/scsi/scsi.o 00:22:59.768 CC module/event/subsystems/nbd/nbd.o 00:22:59.768 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:22:59.768 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:22:59.768 LIB libspdk_event_nbd.a 00:22:59.768 LIB libspdk_event_scsi.a 00:23:00.026 LIB libspdk_event_nvmf.a 00:23:00.026 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:23:00.026 CC module/event/subsystems/iscsi/iscsi.o 00:23:00.298 LIB libspdk_event_vhost_scsi.a 00:23:00.298 LIB libspdk_event_iscsi.a 00:23:00.298 CXX app/trace/trace.o 00:23:00.298 TEST_HEADER include/spdk/config.h 00:23:00.298 CXX test/cpp_headers/rpc.o 00:23:00.560 CC examples/ioat/perf/perf.o 00:23:00.560 CC examples/accel/perf/accel_perf.o 00:23:00.560 CC test/bdev/bdevio/bdevio.o 00:23:00.560 CC examples/bdev/hello_world/hello_bdev.o 00:23:00.560 CC 
test/accel/dif/dif.o 00:23:00.560 CC test/app/bdev_svc/bdev_svc.o 00:23:00.560 CC test/blobfs/mkfs/mkfs.o 00:23:00.560 CC examples/blob/hello_world/hello_blob.o 00:23:00.560 CXX test/cpp_headers/vfio_user_spec.o 00:23:00.560 LINK ioat_perf 00:23:00.560 LINK bdev_svc 00:23:00.560 LINK hello_bdev 00:23:00.560 CXX test/cpp_headers/accel_module.o 00:23:00.560 LINK spdk_trace 00:23:00.560 LINK mkfs 00:23:00.560 LINK dif 00:23:00.560 LINK accel_perf 00:23:00.560 LINK bdevio 00:23:00.819 LINK hello_blob 00:23:00.819 CXX test/cpp_headers/bit_pool.o 00:23:00.819 CXX test/cpp_headers/ioat.o 00:23:01.078 CXX test/cpp_headers/blobfs.o 00:23:01.337 CXX test/cpp_headers/pipe.o 00:23:01.597 CXX test/cpp_headers/accel.o 00:23:02.165 CXX test/cpp_headers/version.o 00:23:02.424 CXX test/cpp_headers/trace_parser.o 00:23:02.683 CXX test/cpp_headers/opal_spec.o 00:23:03.252 CXX test/cpp_headers/uuid.o 00:23:03.819 CXX test/cpp_headers/bdev.o 00:23:04.428 CXX test/cpp_headers/hexlify.o 00:23:05.365 CXX test/cpp_headers/likely.o 00:23:05.933 CXX test/cpp_headers/vhost.o 00:23:06.501 CC app/trace_record/trace_record.o 00:23:07.069 CXX test/cpp_headers/memory.o 00:23:08.024 LINK spdk_trace_record 00:23:08.284 CXX test/cpp_headers/vfio_user_pci.o 00:23:09.660 CXX test/cpp_headers/dma.o 00:23:11.037 CXX test/cpp_headers/nbd.o 00:23:11.296 CXX test/cpp_headers/env.o 00:23:12.233 CXX test/cpp_headers/nvme_zns.o 00:23:13.618 CXX test/cpp_headers/env_dpdk.o 00:23:15.023 CXX test/cpp_headers/init.o 00:23:16.404 CXX test/cpp_headers/fd_group.o 00:23:17.341 CXX test/cpp_headers/bdev_module.o 00:23:18.719 CXX test/cpp_headers/opal.o 00:23:18.977 CC examples/ioat/verify/verify.o 00:23:19.914 CXX test/cpp_headers/event.o 00:23:20.173 LINK verify 00:23:21.111 CXX test/cpp_headers/base64.o 00:23:22.489 CXX test/cpp_headers/nvmf.o 00:23:23.426 CXX test/cpp_headers/nvmf_spec.o 00:23:24.804 CXX test/cpp_headers/blobfs_bdev.o 00:23:26.181 CXX test/cpp_headers/fd.o 00:23:27.115 CXX test/cpp_headers/barrier.o 00:23:28.081 CXX test/cpp_headers/nvmf_fc_spec.o 00:23:29.457 CXX test/cpp_headers/zipf.o 00:23:29.716 CC app/nvmf_tgt/nvmf_main.o 00:23:30.286 CXX test/cpp_headers/scheduler.o 00:23:30.854 LINK nvmf_tgt 00:23:31.791 CXX test/cpp_headers/dif.o 00:23:32.784 CC examples/blob/cli/blobcli.o 00:23:32.784 CXX test/cpp_headers/scsi_spec.o 00:23:34.167 CXX test/cpp_headers/blob.o 00:23:34.425 LINK blobcli 00:23:35.001 CXX test/cpp_headers/cpuset.o 00:23:36.379 CXX test/cpp_headers/thread.o 00:23:37.313 CXX test/cpp_headers/tree.o 00:23:37.572 CXX test/cpp_headers/xor.o 00:23:38.948 CXX test/cpp_headers/assert.o 00:23:39.884 CXX test/cpp_headers/file.o 00:23:40.822 CXX test/cpp_headers/endian.o 00:23:41.836 CXX test/cpp_headers/notify.o 00:23:42.788 CXX test/cpp_headers/util.o 00:23:43.726 CXX test/cpp_headers/log.o 00:23:43.985 CXX test/cpp_headers/sock.o 00:23:44.552 CXX test/cpp_headers/nvme_ocssd_spec.o 00:23:45.929 CXX test/cpp_headers/config.o 00:23:46.188 CXX test/cpp_headers/histogram_data.o 00:23:47.136 CC examples/nvme/hello_world/hello_world.o 00:23:47.395 CXX test/cpp_headers/nvme_intel.o 00:23:48.771 CXX test/cpp_headers/idxd_spec.o 00:23:48.771 LINK hello_world 00:23:49.729 CXX test/cpp_headers/crc16.o 00:23:51.108 CXX test/cpp_headers/bdev_zone.o 00:23:52.485 CXX test/cpp_headers/stdinc.o 00:23:53.862 CXX test/cpp_headers/vmd.o 00:23:54.801 CXX test/cpp_headers/scsi.o 00:23:56.183 CXX test/cpp_headers/jsonrpc.o 00:23:57.561 CXX test/cpp_headers/blob_bdev.o 00:23:59.465 CXX test/cpp_headers/crc32.o 00:24:00.846 CXX 
test/cpp_headers/nvmf_transport.o 00:24:02.224 CXX test/cpp_headers/idxd.o 00:24:03.603 CXX test/cpp_headers/crc64.o 00:24:04.171 CXX test/cpp_headers/nvme.o 00:24:05.549 CXX test/cpp_headers/iscsi_spec.o 00:24:06.118 CXX test/cpp_headers/queue.o 00:24:06.686 CXX test/cpp_headers/nvmf_cmd.o 00:24:07.623 CXX test/cpp_headers/lvol.o 00:24:08.191 CXX test/cpp_headers/ftl.o 00:24:08.759 CXX test/cpp_headers/trace.o 00:24:09.703 CXX test/cpp_headers/ioat_spec.o 00:24:09.961 CC examples/bdev/bdevperf/bdevperf.o 00:24:10.218 CXX test/cpp_headers/conf.o 00:24:10.785 CXX test/cpp_headers/ublk.o 00:24:11.351 CC test/dma/test_dma/test_dma.o 00:24:11.351 LINK bdevperf 00:24:11.610 CXX test/cpp_headers/bit_array.o 00:24:12.178 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:24:12.178 CXX test/cpp_headers/pci_ids.o 00:24:12.178 LINK test_dma 00:24:12.437 CXX test/cpp_headers/nvme_spec.o 00:24:12.698 CC test/app/histogram_perf/histogram_perf.o 00:24:12.698 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:24:12.698 LINK nvme_fuzz 00:24:13.267 CXX test/cpp_headers/string.o 00:24:13.267 LINK histogram_perf 00:24:13.267 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:24:13.835 CXX test/cpp_headers/gpt_spec.o 00:24:13.835 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:24:14.401 CXX test/cpp_headers/nvme_ocssd.o 00:24:14.660 LINK iscsi_fuzz 00:24:14.660 LINK vhost_fuzz 00:24:14.918 CXX test/cpp_headers/json.o 00:24:15.484 CXX test/cpp_headers/reduce.o 00:24:16.051 CXX test/cpp_headers/mmio.o 00:24:17.989 CC examples/nvme/reconnect/reconnect.o 00:24:17.989 CC app/iscsi_tgt/iscsi_tgt.o 00:24:18.922 LINK iscsi_tgt 00:24:18.922 LINK reconnect 00:24:37.003 CC test/app/jsoncat/jsoncat.o 00:24:37.003 LINK jsoncat 00:24:51.886 CC examples/sock/hello_world/hello_sock.o 00:24:51.886 LINK hello_sock 00:24:51.886 CC test/app/stub/stub.o 00:24:52.826 LINK stub 00:24:52.826 CC test/env/vtophys/vtophys.o 00:24:52.826 CC test/env/mem_callbacks/mem_callbacks.o 00:24:53.394 LINK vtophys 00:24:54.331 LINK mem_callbacks 00:24:56.868 CC test/event/event_perf/event_perf.o 00:24:57.437 LINK event_perf 00:24:58.442 CC examples/nvme/nvme_manage/nvme_manage.o 00:24:58.442 CC test/lvol/esnap/esnap.o 00:25:00.345 LINK nvme_manage 00:25:08.468 LINK esnap 00:25:18.444 CC test/event/reactor/reactor.o 00:25:18.444 LINK reactor 00:25:21.015 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:25:21.953 LINK env_dpdk_post_init 00:25:23.859 CC test/event/reactor_perf/reactor_perf.o 00:25:24.427 LINK reactor_perf 00:25:24.992 CC test/nvme/aer/aer.o 00:25:25.927 LINK aer 00:25:26.185 CC examples/vmd/lsvmd/lsvmd.o 00:25:26.751 LINK lsvmd 00:25:29.282 CC examples/nvmf/nvmf/nvmf.o 00:25:29.282 CC test/nvme/reset/reset.o 00:25:29.848 LINK nvmf 00:25:30.106 CC test/env/memory/memory_ut.o 00:25:30.106 LINK reset 00:25:32.645 LINK memory_ut 00:25:35.931 CC examples/nvme/arbitration/arbitration.o 00:25:37.306 LINK arbitration 00:25:38.339 CC test/env/pci/pci_ut.o 00:25:39.282 LINK pci_ut 00:25:42.573 CC app/spdk_tgt/spdk_tgt.o 00:25:43.223 LINK spdk_tgt 00:25:43.792 CC test/event/app_repeat/app_repeat.o 00:25:44.729 LINK app_repeat 00:25:51.336 CC examples/nvme/hotplug/hotplug.o 00:25:51.336 CC test/event/scheduler/scheduler.o 00:25:51.904 LINK hotplug 00:25:51.904 LINK scheduler 00:25:54.439 CC test/nvme/sgl/sgl.o 00:25:55.374 LINK sgl 00:25:56.310 CC test/rpc_client/rpc_client_test.o 00:25:56.310 CC examples/vmd/led/led.o 00:25:56.906 LINK rpc_client_test 00:25:57.166 LINK led 00:26:00.456 CC test/nvme/e2edp/nvme_dp.o 00:26:01.024 LINK nvme_dp 00:26:09.147 CC 
examples/nvme/cmb_copy/cmb_copy.o 00:26:09.406 LINK cmb_copy 00:26:21.614 CC test/thread/poller_perf/poller_perf.o 00:26:21.615 LINK poller_perf 00:26:23.519 CC examples/nvme/abort/abort.o 00:26:24.455 LINK abort 00:26:24.455 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:26:25.392 LINK pmr_persistence 00:26:35.373 CC test/nvme/overhead/overhead.o 00:26:35.373 LINK overhead 00:26:35.373 CC examples/util/zipf/zipf.o 00:26:35.941 LINK zipf 00:26:37.843 CC test/nvme/err_injection/err_injection.o 00:26:38.411 LINK err_injection 00:26:46.531 CC app/spdk_lspci/spdk_lspci.o 00:26:47.097 LINK spdk_lspci 00:26:47.663 CC examples/thread/thread/thread_ex.o 00:26:49.038 LINK thread 00:26:51.569 CC test/thread/lock/spdk_lock.o 00:26:53.470 LINK spdk_lock 00:26:57.661 CC examples/idxd/perf/perf.o 00:26:58.228 CC app/spdk_nvme_perf/perf.o 00:26:58.487 LINK idxd_perf 00:26:59.422 CC examples/interrupt_tgt/interrupt_tgt.o 00:26:59.680 LINK spdk_nvme_perf 00:27:00.247 LINK interrupt_tgt 00:27:03.529 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:27:03.788 LINK histogram_ut 00:27:06.317 CC test/nvme/startup/startup.o 00:27:07.254 LINK startup 00:27:08.189 CC test/nvme/reserve/reserve.o 00:27:08.756 CC test/unit/lib/accel/accel.c/accel_ut.o 00:27:09.015 LINK reserve 00:27:10.920 CC app/spdk_nvme_identify/identify.o 00:27:11.855 LINK accel_ut 00:27:12.423 LINK spdk_nvme_identify 00:27:13.358 CC app/spdk_nvme_discover/discovery_aer.o 00:27:14.342 LINK spdk_nvme_discover 00:27:18.534 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:27:19.102 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:27:21.006 LINK blob_bdev_ut 00:27:27.644 LINK bdev_ut 00:27:27.644 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:27:28.212 LINK tree_ut 00:27:30.118 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:27:32.023 CC test/unit/lib/blob/blob.c/blob_ut.o 00:27:32.023 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:27:32.591 LINK blobfs_async_ut 00:27:35.125 LINK blobfs_sync_ut 00:27:37.787 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:27:38.046 LINK blobfs_bdev_ut 00:27:38.615 CC test/nvme/simple_copy/simple_copy.o 00:27:39.183 LINK simple_copy 00:27:39.183 LINK blob_ut 00:27:39.753 CC test/nvme/connect_stress/connect_stress.o 00:27:39.753 CC test/unit/lib/bdev/part.c/part_ut.o 00:27:40.012 LINK connect_stress 00:27:40.012 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:27:40.271 CC test/nvme/boot_partition/boot_partition.o 00:27:40.271 LINK scsi_nvme_ut 00:27:40.531 LINK boot_partition 00:27:41.099 CC app/spdk_top/spdk_top.o 00:27:41.667 LINK part_ut 00:27:41.667 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:27:41.667 LINK spdk_top 00:27:41.925 CC test/nvme/compliance/nvme_compliance.o 00:27:42.184 LINK gpt_ut 00:27:42.443 LINK nvme_compliance 00:27:42.702 CC test/unit/lib/event/app.c/app_ut.o 00:27:42.963 CC test/unit/lib/dma/dma.c/dma_ut.o 00:27:43.900 LINK dma_ut 00:27:43.900 LINK app_ut 00:27:46.443 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:27:47.387 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:27:48.321 LINK reactor_ut 00:27:48.892 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:27:49.467 LINK vbdev_lvol_ut 00:27:50.033 LINK ioat_ut 00:27:50.984 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:27:51.552 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:27:54.084 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:27:54.084 LINK bdev_raid_ut 00:27:54.653 LINK bdev_ut 00:27:55.225 LINK conn_ut 00:27:55.793 CC test/nvme/fused_ordering/fused_ordering.o 00:27:56.363 LINK 
fused_ordering 00:27:56.621 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:27:57.189 LINK init_grp_ut 00:27:57.448 CC app/vhost/vhost.o 00:27:58.015 LINK vhost 00:27:58.583 CC app/spdk_dd/spdk_dd.o 00:27:58.841 CC test/nvme/doorbell_aers/doorbell_aers.o 00:27:58.841 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:27:58.841 LINK doorbell_aers 00:27:59.101 LINK spdk_dd 00:27:59.360 CC test/unit/lib/iscsi/param.c/param_ut.o 00:27:59.619 CC test/nvme/fdp/fdp.o 00:27:59.619 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:27:59.619 LINK iscsi_ut 00:27:59.619 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:27:59.879 LINK param_ut 00:28:00.139 LINK fdp 00:28:00.139 LINK bdev_raid_sb_ut 00:28:00.399 LINK portal_grp_ut 00:28:01.778 CC app/fio/nvme/fio_plugin.o 00:28:02.716 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:28:02.716 LINK spdk_nvme 00:28:03.284 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:28:03.853 LINK concat_ut 00:28:05.267 LINK tgt_node_ut 00:28:06.205 CC app/fio/bdev/fio_plugin.o 00:28:08.113 LINK spdk_bdev 00:28:08.718 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:28:10.095 LINK raid1_ut 00:28:11.473 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:28:12.410 LINK bdev_zone_ut 00:28:12.979 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:28:14.884 LINK vbdev_zone_block_ut 00:28:15.822 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:28:16.759 CC test/nvme/cuse/cuse.o 00:28:19.300 LINK cuse 00:28:20.676 LINK bdev_nvme_ut 00:28:22.070 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:28:22.358 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:28:23.293 LINK jsonrpc_server_ut 00:28:24.230 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:28:24.488 CC test/unit/lib/log/log.c/log_ut.o 00:28:24.488 LINK json_parse_ut 00:28:25.056 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:28:25.315 LINK log_ut 00:28:25.883 LINK json_util_ut 00:28:28.416 LINK lvol_ut 00:28:30.322 In function '_mm256_storeu_si256', 00:28:30.322 inlined from 'rte_memcpy_generic' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:347:2, 00:28:30.322 inlined from 'rte_cryptodev_sym_session_set_user_data' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:868:10: 00:28:30.322 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/avxintrin.h:928:8: warning: writing 32 bytes into a region of size 0 [-Wstringop-overflow=] 00:28:30.322 928 | *__P = __A; 00:28:30.322 | ^ 00:28:30.322 ../../../dpdk/build-tmp/../lib/cryptodev/rte_cryptodev.c: In function 'rte_cryptodev_sym_session_set_user_data': 00:28:30.322 ../../../dpdk/build-tmp/../lib/cryptodev/cryptodev_pmd.h:156:10: note: at offset 0 to object 'driver_priv_data' with size 0 declared here 00:28:30.323 156 | uint8_t driver_priv_data[0]; 00:28:30.323 | ^ 00:28:30.323 In function '_mm_storeu_si128', 00:28:30.323 inlined from 'rte_memcpy_generic' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:334:2, 00:28:30.323 inlined from 'rte_cryptodev_sym_session_set_user_data' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:868:10: 00:28:30.323 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/emmintrin.h:727:8: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=] 00:28:30.323 727 | *__P = __B; 00:28:30.323 | ^ 00:28:30.323 ../../../dpdk/build-tmp/../lib/cryptodev/rte_cryptodev.c: In function 'rte_cryptodev_sym_session_set_user_data': 00:28:30.323 
../../../dpdk/build-tmp/../lib/cryptodev/cryptodev_pmd.h:156:10: note: at offset 0 to object 'driver_priv_data' with size 0 declared here 00:28:30.323 156 | uint8_t driver_priv_data[0]; 00:28:30.323 | ^ 00:28:30.323 In function '_mm_storeu_si128', 00:28:30.323 inlined from 'rte_memcpy_generic' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:334:2, 00:28:30.323 inlined from 'rte_cryptodev_sym_session_set_user_data' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:868:10: 00:28:30.323 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/emmintrin.h:727:8: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=] 00:28:30.323 727 | *__P = __B; 00:28:30.323 | ^ 00:28:30.323 ../../../dpdk/build-tmp/../lib/cryptodev/rte_cryptodev.c: In function 'rte_cryptodev_sym_session_set_user_data': 00:28:30.323 ../../../dpdk/build-tmp/../lib/cryptodev/cryptodev_pmd.h:156:10: note: at offset 0 to object 'driver_priv_data' with size 0 declared here 00:28:30.323 156 | uint8_t driver_priv_data[0]; 00:28:30.323 | ^ 00:28:30.323 In function '_mm256_storeu_si256', 00:28:30.323 inlined from 'rte_memcpy_aligned' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:347:2, 00:28:30.323 inlined from 'rte_cryptodev_sym_session_set_user_data' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:866:10: 00:28:30.323 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/avxintrin.h:928:8: warning: writing 32 bytes into a region of size 0 [-Wstringop-overflow=] 00:28:30.323 928 | *__P = __A; 00:28:30.323 | ^ 00:28:30.323 ../../../dpdk/build-tmp/../lib/cryptodev/rte_cryptodev.c: In function 'rte_cryptodev_sym_session_set_user_data': 00:28:30.323 ../../../dpdk/build-tmp/../lib/cryptodev/cryptodev_pmd.h:156:10: note: at offset 0 to object 'driver_priv_data' with size 0 declared here 00:28:30.323 156 | uint8_t driver_priv_data[0]; 00:28:30.323 | ^ 00:28:30.323 In function '_mm_storeu_si128', 00:28:30.323 inlined from 'rte_memcpy_aligned' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:334:2, 00:28:30.323 inlined from 'rte_cryptodev_sym_session_set_user_data' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:866:10: 00:28:30.323 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/emmintrin.h:727:8: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=] 00:28:30.323 727 | *__P = __B; 00:28:30.323 | ^ 00:28:30.323 ../../../dpdk/build-tmp/../lib/cryptodev/rte_cryptodev.c: In function 'rte_cryptodev_sym_session_set_user_data': 00:28:30.323 ../../../dpdk/build-tmp/../lib/cryptodev/cryptodev_pmd.h:156:10: note: at offset 0 to object 'driver_priv_data' with size 0 declared here 00:28:30.323 156 | uint8_t driver_priv_data[0]; 00:28:30.323 | ^ 00:28:31.264 CC test/unit/lib/notify/notify.c/notify_ut.o 00:28:32.643 LINK notify_ut 00:28:32.902 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:28:32.902 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:28:34.812 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:28:36.225 LINK nvme_ut 00:28:36.501 LINK json_write_ut 00:28:36.764 LINK tcp_ut 00:28:37.022 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:28:38.398 LINK dev_ut 00:28:38.656 CC test/unit/lib/sock/sock.c/sock_ut.o 00:28:41.957 LINK sock_ut 00:28:43.333 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:28:43.333 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:28:44.709 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:28:46.101 In function '_mm256_storeu_si256', 00:28:46.101 inlined from 
'rte_memcpy_generic' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:347:2, 00:28:46.101 inlined from 'rte_thash_init_ctx' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:868:10, 00:28:46.101 inlined from 'rte_thash_init_ctx' at ../../../dpdk/build-tmp/../lib/hash/rte_thash.c:211:1: 00:28:46.101 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/avxintrin.h:928:8: warning: writing 32 bytes into a region of size 0 [-Wstringop-overflow=] 00:28:46.101 928 | *__P = __A; 00:28:46.101 | ^ 00:28:46.101 ../../../dpdk/build-tmp/../lib/hash/rte_thash.c: In function 'rte_thash_init_ctx': 00:28:46.101 ../../../dpdk/build-tmp/../lib/hash/rte_thash.c:91:11: note: at offset 0 to object 'hash_key' with size 0 declared here 00:28:46.101 91 | uint8_t hash_key[0]; 00:28:46.101 | ^ 00:28:46.101 In function '_mm_storeu_si128', 00:28:46.101 inlined from 'rte_memcpy_generic' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:334:2, 00:28:46.101 inlined from 'rte_thash_init_ctx' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:868:10, 00:28:46.101 inlined from 'rte_thash_init_ctx' at ../../../dpdk/build-tmp/../lib/hash/rte_thash.c:211:1: 00:28:46.101 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/emmintrin.h:727:8: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=] 00:28:46.101 727 | *__P = __B; 00:28:46.101 | ^ 00:28:46.101 ../../../dpdk/build-tmp/../lib/hash/rte_thash.c: In function 'rte_thash_init_ctx': 00:28:46.101 ../../../dpdk/build-tmp/../lib/hash/rte_thash.c:91:11: note: at offset 0 to object 'hash_key' with size 0 declared here 00:28:46.101 91 | uint8_t hash_key[0]; 00:28:46.101 | ^ 00:28:46.101 In function '_mm_storeu_si128', 00:28:46.101 inlined from 'rte_memcpy_generic' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:334:2, 00:28:46.101 inlined from 'rte_thash_init_ctx' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:868:10, 00:28:46.101 inlined from 'rte_thash_init_ctx' at ../../../dpdk/build-tmp/../lib/hash/rte_thash.c:211:1: 00:28:46.102 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/emmintrin.h:727:8: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=] 00:28:46.102 727 | *__P = __B; 00:28:46.102 | ^ 00:28:46.102 ../../../dpdk/build-tmp/../lib/hash/rte_thash.c: In function 'rte_thash_init_ctx': 00:28:46.102 ../../../dpdk/build-tmp/../lib/hash/rte_thash.c:91:11: note: at offset 0 to object 'hash_key' with size 0 declared here 00:28:46.102 91 | uint8_t hash_key[0]; 00:28:46.102 | ^ 00:28:46.102 In function '_mm256_storeu_si256', 00:28:46.102 inlined from 'rte_memcpy_aligned' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:347:2, 00:28:46.102 inlined from 'rte_thash_init_ctx' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:866:10, 00:28:46.102 inlined from 'rte_thash_init_ctx' at ../../../dpdk/build-tmp/../lib/hash/rte_thash.c:211:1: 00:28:46.102 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/avxintrin.h:928:8: warning: writing 32 bytes into a region of size 0 [-Wstringop-overflow=] 00:28:46.102 928 | *__P = __A; 00:28:46.102 | ^ 00:28:46.102 ../../../dpdk/build-tmp/../lib/hash/rte_thash.c: In function 'rte_thash_init_ctx': 00:28:46.102 ../../../dpdk/build-tmp/../lib/hash/rte_thash.c:91:11: note: at offset 0 to object 'hash_key' with size 0 declared here 00:28:46.102 91 | uint8_t hash_key[0]; 00:28:46.102 | ^ 00:28:46.102 In function '_mm_storeu_si128', 
00:28:46.102 inlined from 'rte_memcpy_aligned' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:334:2, 00:28:46.102 inlined from 'rte_thash_init_ctx' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:866:10, 00:28:46.102 inlined from 'rte_thash_init_ctx' at ../../../dpdk/build-tmp/../lib/hash/rte_thash.c:211:1: 00:28:46.102 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/emmintrin.h:727:8: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=] 00:28:46.102 727 | *__P = __B; 00:28:46.102 | ^ 00:28:46.102 ../../../dpdk/build-tmp/../lib/hash/rte_thash.c: In function 'rte_thash_init_ctx': 00:28:46.102 ../../../dpdk/build-tmp/../lib/hash/rte_thash.c:91:11: note: at offset 0 to object 'hash_key' with size 0 declared here 00:28:46.102 91 | uint8_t hash_key[0]; 00:28:46.102 | ^ 00:28:46.102 LINK lun_ut 00:28:46.102 LINK nvme_ctrlr_cmd_ut 00:28:47.040 LINK nvme_ctrlr_ut 00:28:47.300 In function '_mm256_storeu_si256', 00:28:47.300 inlined from 'rte_memcpy_generic' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:347:2, 00:28:47.300 inlined from 'rte_cryptodev_sym_session_set_user_data' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:868:10: 00:28:47.300 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/avxintrin.h:928:8: warning: writing 32 bytes into a region of size 0 [-Wstringop-overflow=] 00:28:47.300 928 | *__P = __A; 00:28:47.300 | ^ 00:28:47.300 ../../../dpdk/build-tmp/../lib/cryptodev/rte_cryptodev.c: In function 'rte_cryptodev_sym_session_set_user_data': 00:28:47.300 ../../../dpdk/build-tmp/../lib/cryptodev/cryptodev_pmd.h:156:10: note: at offset 0 to object 'driver_priv_data' with size 0 declared here 00:28:47.300 156 | uint8_t driver_priv_data[0]; 00:28:47.300 | ^ 00:28:47.300 In function '_mm_storeu_si128', 00:28:47.300 inlined from 'rte_memcpy_generic' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:334:2, 00:28:47.300 inlined from 'rte_cryptodev_sym_session_set_user_data' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:868:10: 00:28:47.300 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/emmintrin.h:727:8: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=] 00:28:47.300 727 | *__P = __B; 00:28:47.300 | ^ 00:28:47.300 ../../../dpdk/build-tmp/../lib/cryptodev/rte_cryptodev.c: In function 'rte_cryptodev_sym_session_set_user_data': 00:28:47.300 ../../../dpdk/build-tmp/../lib/cryptodev/cryptodev_pmd.h:156:10: note: at offset 0 to object 'driver_priv_data' with size 0 declared here 00:28:47.300 156 | uint8_t driver_priv_data[0]; 00:28:47.300 | ^ 00:28:47.300 In function '_mm_storeu_si128', 00:28:47.300 inlined from 'rte_memcpy_generic' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:334:2, 00:28:47.300 inlined from 'rte_cryptodev_sym_session_set_user_data' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:868:10: 00:28:47.300 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/emmintrin.h:727:8: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=] 00:28:47.300 727 | *__P = __B; 00:28:47.300 | ^ 00:28:47.300 ../../../dpdk/build-tmp/../lib/cryptodev/rte_cryptodev.c: In function 'rte_cryptodev_sym_session_set_user_data': 00:28:47.300 ../../../dpdk/build-tmp/../lib/cryptodev/cryptodev_pmd.h:156:10: note: at offset 0 to object 'driver_priv_data' with size 0 declared here 00:28:47.300 156 | uint8_t driver_priv_data[0]; 00:28:47.300 | ^ 00:28:47.300 
In function '_mm256_storeu_si256', 00:28:47.300 inlined from 'rte_memcpy_aligned' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:347:2, 00:28:47.301 inlined from 'rte_cryptodev_sym_session_set_user_data' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:866:10: 00:28:47.301 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/avxintrin.h:928:8: warning: writing 32 bytes into a region of size 0 [-Wstringop-overflow=] 00:28:47.301 928 | *__P = __A; 00:28:47.301 | ^ 00:28:47.301 ../../../dpdk/build-tmp/../lib/cryptodev/rte_cryptodev.c: In function 'rte_cryptodev_sym_session_set_user_data': 00:28:47.301 ../../../dpdk/build-tmp/../lib/cryptodev/cryptodev_pmd.h:156:10: note: at offset 0 to object 'driver_priv_data' with size 0 declared here 00:28:47.301 156 | uint8_t driver_priv_data[0]; 00:28:47.301 | ^ 00:28:47.301 In function '_mm_storeu_si128', 00:28:47.301 inlined from 'rte_memcpy_aligned' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:334:2, 00:28:47.301 inlined from 'rte_cryptodev_sym_session_set_user_data' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:866:10: 00:28:47.301 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/emmintrin.h:727:8: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=] 00:28:47.301 727 | *__P = __B; 00:28:47.301 | ^ 00:28:47.301 ../../../dpdk/build-tmp/../lib/cryptodev/rte_cryptodev.c: In function 'rte_cryptodev_sym_session_set_user_data': 00:28:47.301 ../../../dpdk/build-tmp/../lib/cryptodev/cryptodev_pmd.h:156:10: note: at offset 0 to object 'driver_priv_data' with size 0 declared here 00:28:47.301 156 | uint8_t driver_priv_data[0]; 00:28:47.301 | ^ 00:28:47.869 CC test/unit/lib/sock/posix.c/posix_ut.o 00:28:48.807 LINK posix_ut 00:28:49.065 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:28:50.973 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:28:50.973 LINK ctrlr_ut 00:28:50.973 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:28:51.541 LINK nvme_ctrlr_ocssd_cmd_ut 00:28:51.541 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:28:51.801 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:28:52.060 LINK scsi_ut 00:28:52.628 LINK nvme_ns_ut 00:28:52.628 CC test/unit/lib/thread/thread.c/thread_ut.o 00:28:53.196 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:28:53.457 LINK nvme_ns_cmd_ut 00:28:54.040 CC test/unit/lib/util/base64.c/base64_ut.o 00:28:54.040 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:28:54.298 LINK thread_ut 00:28:54.298 LINK base64_ut 00:28:54.298 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:28:54.556 LINK nvme_ns_ocssd_cmd_ut 00:28:54.556 LINK scsi_bdev_ut 00:28:54.815 LINK pci_event_ut 00:28:55.423 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:28:55.423 In function '_mm256_storeu_si256', 00:28:55.423 inlined from 'rte_memcpy_generic' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:347:2, 00:28:55.423 inlined from 'rte_thash_init_ctx' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:868:10, 00:28:55.423 inlined from 'rte_thash_init_ctx' at ../../../dpdk/build-tmp/../lib/hash/rte_thash.c:211:1: 00:28:55.423 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/avxintrin.h:928:8: warning: writing 32 bytes into a region of size 0 [-Wstringop-overflow=] 00:28:55.423 928 | *__P = __A; 00:28:55.423 | ^ 00:28:55.423 ../../../dpdk/build-tmp/../lib/hash/rte_thash.c: In function 'rte_thash_init_ctx': 00:28:55.423 
../../../dpdk/build-tmp/../lib/hash/rte_thash.c:91:11: note: at offset 0 to object 'hash_key' with size 0 declared here 00:28:55.423 91 | uint8_t hash_key[0]; 00:28:55.423 | ^ 00:28:55.423 In function '_mm_storeu_si128', 00:28:55.423 inlined from 'rte_memcpy_generic' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:334:2, 00:28:55.423 inlined from 'rte_thash_init_ctx' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:868:10, 00:28:55.423 inlined from 'rte_thash_init_ctx' at ../../../dpdk/build-tmp/../lib/hash/rte_thash.c:211:1: 00:28:55.423 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/emmintrin.h:727:8: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=] 00:28:55.423 727 | *__P = __B; 00:28:55.423 | ^ 00:28:55.423 ../../../dpdk/build-tmp/../lib/hash/rte_thash.c: In function 'rte_thash_init_ctx': 00:28:55.423 ../../../dpdk/build-tmp/../lib/hash/rte_thash.c:91:11: note: at offset 0 to object 'hash_key' with size 0 declared here 00:28:55.423 91 | uint8_t hash_key[0]; 00:28:55.423 | ^ 00:28:55.423 In function '_mm_storeu_si128', 00:28:55.423 inlined from 'rte_memcpy_generic' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:334:2, 00:28:55.423 inlined from 'rte_thash_init_ctx' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:868:10, 00:28:55.423 inlined from 'rte_thash_init_ctx' at ../../../dpdk/build-tmp/../lib/hash/rte_thash.c:211:1: 00:28:55.423 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/emmintrin.h:727:8: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=] 00:28:55.423 727 | *__P = __B; 00:28:55.423 | ^ 00:28:55.423 ../../../dpdk/build-tmp/../lib/hash/rte_thash.c: In function 'rte_thash_init_ctx': 00:28:55.423 ../../../dpdk/build-tmp/../lib/hash/rte_thash.c:91:11: note: at offset 0 to object 'hash_key' with size 0 declared here 00:28:55.423 91 | uint8_t hash_key[0]; 00:28:55.423 | ^ 00:28:55.423 In function '_mm256_storeu_si256', 00:28:55.423 inlined from 'rte_memcpy_aligned' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:347:2, 00:28:55.423 inlined from 'rte_thash_init_ctx' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:866:10, 00:28:55.423 inlined from 'rte_thash_init_ctx' at ../../../dpdk/build-tmp/../lib/hash/rte_thash.c:211:1: 00:28:55.423 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/avxintrin.h:928:8: warning: writing 32 bytes into a region of size 0 [-Wstringop-overflow=] 00:28:55.423 928 | *__P = __A; 00:28:55.423 | ^ 00:28:55.423 ../../../dpdk/build-tmp/../lib/hash/rte_thash.c: In function 'rte_thash_init_ctx': 00:28:55.423 ../../../dpdk/build-tmp/../lib/hash/rte_thash.c:91:11: note: at offset 0 to object 'hash_key' with size 0 declared here 00:28:55.423 91 | uint8_t hash_key[0]; 00:28:55.423 | ^ 00:28:55.423 In function '_mm_storeu_si128', 00:28:55.423 inlined from 'rte_memcpy_aligned' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:334:2, 00:28:55.423 inlined from 'rte_thash_init_ctx' at ../../../dpdk/build-tmp/../lib/eal/x86/include/rte_memcpy.h:866:10, 00:28:55.423 inlined from 'rte_thash_init_ctx' at ../../../dpdk/build-tmp/../lib/hash/rte_thash.c:211:1: 00:28:55.423 /opt/rh/devtoolset-10/root/usr/lib/gcc/x86_64-redhat-linux/10/include/emmintrin.h:727:8: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=] 00:28:55.423 727 | *__P = __B; 00:28:55.423 | ^ 00:28:55.423 ../../../dpdk/build-tmp/../lib/hash/rte_thash.c: In function 
00:28:56.359 CC test/unit/lib/util/bit_array.c/bit_array_ut.o
00:28:56.927 CC test/unit/lib/init/subsystem.c/subsystem_ut.o
00:28:56.927 LINK bit_array_ut
00:28:56.927 LINK nvme_pcie_ut
00:28:57.494 LINK subsystem_ut
00:28:57.494 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o
00:28:58.062 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o
00:28:58.321 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o
00:28:58.321 CC test/unit/lib/util/cpuset.c/cpuset_ut.o
00:28:58.580 CC test/unit/lib/util/crc16.c/crc16_ut.o
00:28:58.580 LINK scsi_pr_ut
00:28:58.840 LINK subsystem_ut
00:28:58.840 LINK iobuf_ut
00:28:58.840 LINK cpuset_ut
00:28:58.840 LINK crc16_ut
00:28:59.776 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o
00:29:00.714 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o
00:29:00.714 LINK crc32_ieee_ut
00:29:00.973 CC test/unit/lib/util/crc32c.c/crc32c_ut.o
00:29:01.232 CC test/unit/lib/rpc/rpc.c/rpc_ut.o
00:29:01.232 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o
00:29:01.804 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o
00:29:01.804 LINK crc32c_ut
00:29:01.804 LINK rpc_ut
00:29:01.804 LINK nvme_poll_group_ut
00:29:02.064 CC test/unit/lib/idxd/idxd.c/idxd_ut.o
00:29:02.324 LINK idxd_user_ut
00:29:02.585 LINK nvme_qpair_ut
00:29:02.585 LINK idxd_ut
00:29:02.843 CC test/unit/lib/util/crc64.c/crc64_ut.o
00:29:03.102 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o
00:29:03.102 CC test/unit/lib/util/dif.c/dif_ut.o
00:29:03.102 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o
00:29:03.670 LINK nvme_quirks_ut
00:29:03.670 LINK crc64_ut
00:29:03.928 LINK ctrlr_discovery_ut
00:29:04.187 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o
00:29:04.187 LINK dif_ut
00:29:04.187 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o
00:29:04.445 CC test/unit/lib/util/iov.c/iov_ut.o
00:29:04.445 CC test/unit/lib/vhost/vhost.c/vhost_ut.o
00:29:04.445 LINK iov_ut
00:29:04.704 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o
00:29:04.705 LINK ctrlr_bdev_ut
00:29:04.964 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o
00:29:05.285 LINK vhost_ut
00:29:05.285 LINK nvme_tcp_ut
00:29:05.543 LINK nvme_transport_ut
00:29:05.802 LINK nvme_io_msg_ut
00:29:06.369 CC test/unit/lib/util/math.c/math_ut.o
00:29:06.629 LINK math_ut
00:29:06.629 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o
00:29:06.888 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o
00:29:07.147 LINK nvmf_ut
00:29:07.406 CC test/unit/lib/util/pipe.c/pipe_ut.o
00:29:07.974 CC test/unit/lib/util/string.c/string_ut.o
00:29:07.974 CC test/unit/lib/rdma/common.c/common_ut.o
00:29:07.974 LINK pipe_ut
00:29:07.974 LINK nvme_pcie_common_ut
00:29:08.232 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o
00:29:08.232 LINK string_ut
00:29:08.232 LINK common_ut
00:29:08.801 LINK rdma_ut
00:29:09.060 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o
00:29:09.060 CC test/unit/lib/util/xor.c/xor_ut.o
00:29:09.318 LINK xor_ut
00:29:09.318 LINK ftl_l2p_ut
00:29:09.577 CC test/unit/lib/nvmf/transport.c/transport_ut.o
00:29:09.577 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o
00:29:10.146 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o
00:29:10.146 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o
00:29:10.405 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o
00:29:10.405 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o
00:29:10.405 LINK nvme_opal_ut
00:29:10.405 LINK nvme_fabric_ut
00:29:10.405 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o
00:29:10.664 LINK transport_ut
00:29:10.664 LINK ftl_band_ut
00:29:11.233 LINK ftl_io_ut
00:29:11.233 LINK nvme_cuse_ut
00:29:11.492 LINK nvme_rdma_ut
00:29:11.751 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o
00:29:11.751 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o
00:29:12.010 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o
00:29:12.010 LINK ftl_bitmap_ut
00:29:12.010 LINK ftl_mempool_ut
00:29:12.268 LINK ftl_mngt_ut
00:29:12.834 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o
00:29:12.834 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o
00:29:13.400 LINK ftl_sb_ut
00:29:13.400 LINK ftl_layout_upgrade_ut
00:29:17.592 20:56:00 -- spdk/autopackage.sh@44 -- $ make -j10 clean
00:29:17.592 make[1]: Nothing to be done for 'clean'.
00:29:20.883 20:56:04 -- spdk/autopackage.sh@46 -- $ timing_exit build_release
00:29:20.883 20:56:04 -- common/autotest_common.sh@718 -- $ xtrace_disable
00:29:20.883 20:56:04 -- common/autotest_common.sh@10 -- $ set +x
00:29:20.883 20:56:04 -- spdk/autopackage.sh@48 -- $ timing_finish
00:29:20.883 20:56:04 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:29:20.883 20:56:04 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:29:20.883 20:56:04 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
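The timing_finish step above renders the collected step timings with FlameGraph's flamegraph.pl, which reads "folded" stack lines: semicolon-separated frame names followed by a space and a numeric value, one record per line. Here the value is labeled "seconds" via --countname, and --nametype swaps the SVG's default "Function:" hover label for "Step:". An illustrative sketch of what such input looks like -- these step names are invented for the example, not taken from this run's timing.txt:

    autotest;autobuild 412
    autotest;run_test;unittest 389
    autotest;run_test;unittest;nvme 120

flamegraph.pl turns lines like these into an SVG flame graph in which each box's width is proportional to its recorded seconds, giving a visual breakdown of where build time went.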
00:29:20.883 + [[ -n 2913 ]]
00:29:20.883 + sudo kill 2913
00:29:20.883 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core"
00:29:21.151 [Pipeline] }
00:29:21.171 [Pipeline] // timeout
00:29:21.177 [Pipeline] }
00:29:21.194 [Pipeline] // stage
00:29:21.199 [Pipeline] }
00:29:21.217 [Pipeline] // catchError
00:29:21.226 [Pipeline] stage
00:29:21.229 [Pipeline] { (Stop VM)
00:29:21.243 [Pipeline] sh
00:29:21.525 + vagrant halt
00:29:24.860 ==> default: Halting domain...
00:29:30.133 [Pipeline] sh
00:29:30.413 + vagrant destroy -f
00:29:33.718 ==> default: Removing domain...
00:29:33.989 [Pipeline] sh
00:29:34.272 + mv output /var/jenkins/workspace/centos7-vg-autotest/output
00:29:34.281 [Pipeline] }
00:29:34.298 [Pipeline] // stage
00:29:34.303 [Pipeline] }
00:29:34.321 [Pipeline] // dir
00:29:34.326 [Pipeline] }
00:29:34.342 [Pipeline] // wrap
00:29:34.348 [Pipeline] }
00:29:34.363 [Pipeline] // catchError
00:29:34.373 [Pipeline] stage
00:29:34.375 [Pipeline] { (Epilogue)
00:29:34.389 [Pipeline] sh
00:29:34.729 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:29:52.889 [Pipeline] catchError
00:29:52.891 [Pipeline] {
00:29:52.906 [Pipeline] sh
00:29:53.187 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:29:53.187 Artifacts sizes are good
00:29:53.196 [Pipeline] }
00:29:53.213 [Pipeline] // catchError
00:29:53.224 [Pipeline] archiveArtifacts
00:29:53.231 Archiving artifacts
00:29:53.517 [Pipeline] cleanWs
00:29:53.528 [WS-CLEANUP] Deleting project workspace...
00:29:53.528 [WS-CLEANUP] Deferred wipeout is used...
00:29:53.533 [WS-CLEANUP] done
00:29:53.535 [Pipeline] }
00:29:53.554 [Pipeline] // stage
00:29:53.560 [Pipeline] }
00:29:53.577 [Pipeline] // node
00:29:53.583 [Pipeline] End of Pipeline
00:29:53.625 Finished: SUCCESS